Author

Kenny Q. Ye

Bio: Kenny Q. Ye is an academic researcher from Stony Brook University. The author has contributed to research in topics including fractional factorial design and factorial experiments. The author has an h-index of 16 and has co-authored 26 publications receiving 4,046 citations. Previous affiliations of Kenny Q. Ye include the State University of New York System.

Papers
Journal Article
23 Jul 2004-Science
TL;DR: It is shown that large-scale copy number polymorphisms (CNPs) (about 100 kilobases and greater) contribute substantially to genomic variation between normal humans.
Abstract: The extent to which large duplications and deletions contribute to human genetic variation and diversity is unknown. Here, we show that large-scale copy number polymorphisms (CNPs) (about 100 kilobases and greater) contribute substantially to genomic variation between normal humans. Representational oligonucleotide microarray analysis of 20 individuals revealed a total of 221 copy number differences representing 76 unique CNPs. On average, individuals differed by 11 CNPs, and the average length of a CNP interval was 465 kilobases. We observed copy number variation of 70 different genes within CNP intervals, including genes involved in neurological function, regulation of cell growth, regulation of metabolism, and several genes known to be associated with disease.

2,572 citations

Journal Article
TL;DR: The goal is to offer a compromise between computing effort and design optimality; the resulting designs have advantages over regular Latin hypercube designs with respect to criteria such as entropy and minimum intersite distance.
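One of the criteria mentioned above is easy to compute directly. As a rough sketch (not the paper's construction or search algorithm; the function names and the random-permutation candidate generator are illustrative), the snippet below scores Latin hypercube designs by their minimum intersite Euclidean distance, the quantity a maximin-style search would try to make large.

```python
import numpy as np

def random_latin_hypercube(n_runs: int, n_factors: int, seed: int = 0) -> np.ndarray:
    """Each column is an independent random permutation of the levels 1..n_runs."""
    rng = np.random.default_rng(seed)
    return np.column_stack([rng.permutation(n_runs) + 1 for _ in range(n_factors)])

def min_intersite_distance(design: np.ndarray) -> float:
    """Smallest Euclidean distance between any two design points (the maximin criterion)."""
    d = design.astype(float)
    diff = d[:, None, :] - d[None, :, :]       # pairwise coordinate differences
    dist = np.sqrt((diff ** 2).sum(axis=-1))   # pairwise Euclidean distances
    upper = np.triu_indices(len(d), k=1)       # each unordered pair once
    return dist[upper].min()

# Crude illustration of the effort/optimality trade-off: screen a modest number
# of random candidates and keep the best one seen so far.
candidates = [random_latin_hypercube(9, 3, seed=s) for s in range(200)]
best = max(candidates, key=min_intersite_distance)
print("best minimum intersite distance found:", min_intersite_distance(best))
```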

427 citations

Journal Article
TL;DR: A class of Latin hypercubes that preserves orthogonality among columns is proposed; it also facilitates nonparametric fitting procedures, because one can select good space-filling designs within the class of orthogonal Latin hypercubes according to selection criteria.
Abstract: Latin hypercubes have been frequently used in conducting computer experiments. In this paper, a class of orthogonal Latin hypercubes that preserves orthogonality among columns is proposed. Applying an orthogonal Latin hypercube design to a computer experiment benefits the data analysis in two ways. First, it retains the orthogonality of traditional experimental designs. The estimates of linear effects of all factors are uncorrelated not only with each other, but also with the estimates of all quadratic effects and bilinear interactions. Second, it can facilitate nonparametric fitting procedures, because one can select good space-filling designs within the class of orthogonal Latin hypercubes according to selection criteria.
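The orthogonality property described in this abstract can be verified numerically. The sketch below is only an illustration of that property, not the paper's construction; the function name and tolerance are made up here. It checks that, after centering the levels, each column of a candidate Latin hypercube has zero inner product with every other column, with every squared column, and with every elementwise product of two columns, which is what makes the linear-effect estimates uncorrelated with each other and with the quadratic and bilinear ones.

```python
import numpy as np
from itertools import combinations

def is_orthogonal_lhd(design: np.ndarray, tol: float = 1e-9) -> bool:
    """Check the property stated in the abstract for a centered Latin hypercube."""
    X = design.astype(float)
    X = X - X.mean(axis=0)                         # center the levels at zero
    k = X.shape[1]
    for i, j in combinations(range(k), 2):         # linear vs. linear columns
        if abs(X[:, i] @ X[:, j]) > tol:
            return False
    for i in range(k):                             # linear vs. quadratic columns
        for j in range(k):
            if abs(X[:, i] @ (X[:, j] ** 2)) > tol:
                return False
    for i in range(k):                             # linear vs. bilinear interaction columns
        for j, l in combinations(range(k), 2):
            if abs(X[:, i] @ (X[:, j] * X[:, l])) > tol:
                return False
    return True

# A small 5-run, 2-factor Latin hypercube (levels -2..2) that passes the check;
# chosen by hand for illustration, not taken from the paper.
example = np.array([[-2, -1],
                    [-1,  2],
                    [ 0,  0],
                    [ 1, -2],
                    [ 2,  1]])
print(is_orthogonal_lhd(example))   # True
```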

413 citations

Journal Article
TL;DR: In this paper, a polynomial indicator function is used to generalize the aberration criterion of regular two-level fractional factorial designs to all two-level factorial designs, and an important identity of generalized aberration is proved.
Abstract: A two-level factorial design can be uniquely represented by a polynomial indicator function. Therefore, properties of factorial designs can be studied through their indicator functions. This paper shows that the indicator function is an effective tool in studying two-level factorial designs. The indicator function is used to generalize the aberration criterion of a regular two-level fractional factorial design to all two-level factorial designs. An important identity of generalized aberration is proved. The connection between a uniformity measure and aberration is also extended to all two-level factorial designs.
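The indicator-function representation is straightforward to compute for small designs. The sketch below follows the standard definition used in this literature (for a design coded ±1, the coefficient of the word prod_{i in I} x_i is b_I = (1/2^k) times the sum of prod_{i in I} x_i over the design rows, and the generalized word-length pattern sums (b_I / b_empty)^2 over words of each length); the function names and the half-fraction example are illustrative, not code from the paper.

```python
import numpy as np
from itertools import combinations, product

def indicator_coefficients(design: np.ndarray) -> dict:
    """Coefficients b_I of the indicator function F(x) = sum_I b_I * prod_{i in I} x_i
    of a two-level design coded +/-1 (rows are runs, columns are factors)."""
    n, k = design.shape
    coeffs = {}
    for size in range(k + 1):
        for I in combinations(range(k), size):
            word = design[:, list(I)].prod(axis=1) if I else np.ones(n)
            coeffs[I] = word.sum() / 2 ** k
    return coeffs

def generalized_word_length_pattern(design: np.ndarray) -> list:
    """B_j = sum over |I| = j of (b_I / b_empty)^2; sequentially minimizing
    B_1, B_2, ... corresponds to generalized minimum aberration."""
    coeffs = indicator_coefficients(design)
    b0 = coeffs[()]
    k = design.shape[1]
    return [sum((b / b0) ** 2 for I, b in coeffs.items() if len(I) == j)
            for j in range(1, k + 1)]

# Regular half fraction of a 2^3 design defined by x1 * x2 * x3 = +1:
# its only word has length 3, so the pattern should be [0.0, 0.0, 1.0].
half_fraction = np.array([r for r in product([-1, 1], repeat=3) if np.prod(r) == 1])
print(generalized_word_length_pattern(half_fraction))
```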

105 citations

Journal Article
TL;DR: The data suggest that trabecular bone responds to low level mechanical loads with intricate adaptations beyond a simple reduction in apparent strain magnitude, producing a structure that is stiffer and less prone to fracture for a given load.
Abstract: Extremely low magnitude mechanical stimuli (<10 microstrain) induced at high frequencies are anabolic to trabecular bone. Here, we used finite element (FE) modeling to investigate the mechanical implications of a one year mechanical intervention. Adult female sheep stood with their hindlimbs either on a vibrating plate (30 Hz, 0.3 g) for 20 min/d, 5 d/wk or on an inactive plate. Microcomputed tomography data of 1 cm bone cubes extracted from the medial femoral condyles were transformed into FE meshes. Simulated compressive loads applied to the trabecular meshes in the three orthogonal directions indicated that the low level mechanical intervention significantly increased the apparent trabecular tissue stiffness of the femoral condyle in the longitudinal (+17%, p<0.02), anterior-posterior (+29%, p<0.01), and medial-lateral (+37%, p<0.01) directions, thus reducing apparent strain magnitudes for a given applied load. For a given apparent input strain (or stress), the resultant stresses and strains within trabeculae were more uniformly distributed in the off-axis loading directions in cubes of mechanically loaded sheep. These data suggest that trabecular bone responds to low level mechanical loads with intricate adaptations beyond a simple reduction in apparent strain magnitude, producing a structure that is stiffer and less prone to fracture for a given load. © 2003 Biomedical Engineering Society. [DOI: 10.1114/1.1535414]

103 citations


Cited by
Journal Article
TL;DR: This book by a teacher of statistics (as well as a consultant for "experimenters") is a comprehensive study of the philosophical background for the statistical design of experiments.
Abstract: THE DESIGN AND ANALYSIS OF EXPERIMENTS. By Oscar Kempthorne. New York, John Wiley and Sons, Inc., 1952. 631 pp. $8.50. This book by a teacher of statistics (as well as a consultant for "experimenters") is a comprehensive study of the philosophical background for the statistical design of experiments. It is necessary to have some facility with algebraic notation and manipulation to be able to use the volume intelligently. The problems are presented from the theoretical point of view, without such practical examples as would be helpful for those not acquainted with mathematics. The mathematical justification for the techniques is given. As a somewhat advanced treatment of the design and analysis of experiments, this volume will be interesting and helpful for many who approach statistics theoretically as well as practically. With emphasis on the "why," and with description given broadly, the author relates the subject matter to the general theory of statistics and to the general problem of experimental inference. MARGARET J. ROBERTSON

13,333 citations

Journal Article
TL;DR: A technical review of template preparation, sequencing and imaging, genome alignment and assembly approaches, and recent advances in current and near-term commercially available NGS instruments is presented.
Abstract: Demand has never been greater for revolutionary technologies that deliver fast, inexpensive and accurate genome information. This challenge has catalysed the development of next-generation sequencing (NGS) technologies. The inexpensive production of large volumes of sequence data is the primary advantage over conventional methods. Here, I present a technical review of template preparation, sequencing and imaging, genome alignment and assembly approaches, and recent advances in current and near-term commercially available NGS instruments. I also outline the broad range of applications for NGS technologies, in addition to providing guidelines for platform selection to address biological questions of interest.

7,023 citations

Journal Article
TL;DR: This paper presents a meta-modelling framework for computer experiments: predicting output from training data, quantifying prediction uncertainty, and choosing criteria-based designs for efficient prediction.
Abstract: Many scientific phenomena are now investigated by complex computer models or codes. A computer experiment is a number of runs of the code with various inputs. A feature of many computer experiments is that the output is deterministic: rerunning the code with the same inputs gives identical observations. Often, the codes are computationally expensive to run, and a common objective of an experiment is to fit a cheaper predictor of the output to the data. Our approach is to model the deterministic output as the realization of a stochastic process, thereby providing a statistical basis for designing experiments (choosing the inputs) for efficient prediction. With this model, estimates of uncertainty of predictions are also available. Recent work in this area is reviewed, a number of applications are discussed, and we demonstrate our methodology with an example.
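The modeling idea in this abstract (treat the deterministic code output as a realization of a Gaussian stochastic process and predict untried inputs with an uncertainty estimate) can be illustrated in a few lines. The sketch below is a hedged toy version, not the authors' procedure: it fixes a zero mean and a squared-exponential correlation with a hand-picked length scale, adds a tiny jitter only for numerical stability, and skips the estimation of correlation parameters that a real analysis would include; all names (gp_predict, length_scale, the stand-in simulator) are invented for the example.

```python
import numpy as np

def gp_predict(X_train, y_train, X_new, length_scale=0.3, jitter=1e-10):
    """Zero-mean Gaussian-process (kriging) predictor for a deterministic code:
    returns the posterior mean and standard deviation at new inputs."""
    def corr(A, B):
        sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-sq_dists / (2 * length_scale ** 2))

    K = corr(X_train, X_train) + jitter * np.eye(len(X_train))  # no noise term: output is deterministic
    k_star = corr(X_new, X_train)
    mean = k_star @ np.linalg.solve(K, y_train)
    var = 1.0 - np.einsum("ij,ij->i", k_star, np.linalg.solve(K, k_star.T).T)
    return mean, np.sqrt(np.clip(var, 0.0, None))

def expensive_code(x):
    """Stand-in for an expensive deterministic simulator: rerunning it with the
    same inputs gives identical observations."""
    return np.sin(6 * x[:, 0]) + x[:, 0] ** 2

X_train = np.linspace(0.0, 1.0, 8).reshape(-1, 1)   # the chosen design (inputs to run)
y_train = expensive_code(X_train)
X_new = np.array([[0.37], [0.81]])                  # untried inputs
mean, sd = gp_predict(X_train, y_train, X_new)
print(mean, sd)   # cheap predictions with uncertainty estimates
```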

6,583 citations

Journal Article
John W. Belmont, Andrew Boudreau, Suzanne M. Leal, Paul Hardenbol, +229 more authors (40 institutions)
27 Oct 2005
TL;DR: A public database of common variation in the human genome is reported: more than one million single nucleotide polymorphisms for which accurate and complete genotypes have been obtained in 269 DNA samples from four populations, including ten 500-kilobase regions in which essentially all information about common DNA variation has been extracted.
Abstract: Inherited genetic variation has a critical but as yet largely uncharacterized role in human disease. Here we report a public database of common variation in the human genome: more than one million single nucleotide polymorphisms (SNPs) for which accurate and complete genotypes have been obtained in 269 DNA samples from four populations, including ten 500-kilobase regions in which essentially all information about common DNA variation has been extracted. These data document the generality of recombination hotspots, a block-like structure of linkage disequilibrium and low haplotype diversity, leading to substantial correlations of SNPs with many of their neighbours. We show how the HapMap resource can guide the design and analysis of genetic association studies, shed light on structural variation and recombination, and identify loci that may have been subject to natural selection during human evolution.

5,479 citations

Journal Article
23 Nov 2006-Nature
TL;DR: A first-generation CNV map of the human genome is constructed through the study of 270 individuals from four populations with ancestry in Europe, Africa or Asia, underscoring the importance of CNV in genetic diversity and evolution and the utility of this resource for genetic disease studies.
Abstract: Copy number variation (CNV) of DNA sequences is functionally significant but has yet to be fully ascertained. We have constructed a first-generation CNV map of the human genome through the study of 270 individuals from four populations with ancestry in Europe, Africa or Asia (the HapMap collection). DNA from these individuals was screened for CNV using two complementary technologies: single-nucleotide polymorphism (SNP) genotyping arrays, and clone-based comparative genomic hybridization. A total of 1,447 copy number variable regions (CNVRs), which can encompass overlapping or adjacent gains or losses, covering 360 megabases (12% of the genome) were identified in these populations. These CNVRs contained hundreds of genes, disease loci, functional elements and segmental duplications. Notably, the CNVRs encompassed more nucleotide content per genome than SNPs, underscoring the importance of CNV in genetic diversity and evolution. The data obtained delineate linkage disequilibrium patterns for many CNVs, and reveal marked variation in copy number among populations. We also demonstrate the utility of this resource for genetic disease studies.

4,275 citations