scispace - formally typeset

Journal Article DOI: 10.1016/0003-2697(76)90527-3
Marion M. Bradford
Abstract: A protein determination method which involves the binding of Coomassie Brilliant Blue G-250 to protein is described. The binding of the dye to protein causes a shift in the absorption maximum of the dye from 465 to 595 nm, and it is the increase in absorption at 595 nm which is monitored. This assay is very reproducible and rapid, with the dye binding process virtually complete in approximately 2 min and good color stability for 1 hr. There is little or no interference from cations such as sodium or potassium, or from carbohydrates such as sucrose. A small amount of color is developed in the presence of strongly alkaline buffering agents, but the assay may be run accurately by the use of proper buffer controls. The only components found to give excessive interfering color in the assay are relatively large amounts of detergents such as sodium dodecyl sulfate, Triton X-100, and commercial glassware detergents. Interference by small amounts of detergent may be eliminated by the use of proper controls.
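In practice the assay is read against a standard curve: absorbance at 595 nm is measured for known protein amounts, a straight line is fitted, and the fit is inverted for unknown samples. A minimal sketch in Python; the concentrations and absorbance values below are invented for illustration, not taken from the paper:

```python
import numpy as np

# Hypothetical standard curve: known BSA concentrations (µg/mL)
# versus the measured increase in absorbance at 595 nm.
conc = np.array([0.0, 5.0, 10.0, 20.0, 40.0])
a595 = np.array([0.00, 0.06, 0.12, 0.24, 0.48])

# Least-squares straight line: A595 = slope * conc + intercept.
slope, intercept = np.polyfit(conc, a595, 1)

def conc_from_a595(a):
    """Invert the standard curve for an unknown sample."""
    return (a - intercept) / slope

print(conc_from_a595(0.30))  # estimated µg/mL for a reading of 0.30
```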

Topics: Bradford protein assay (59%), Lowry protein assay (55%), Spectrin binding (55%)

Journal Article DOI: 10.1103/PHYSREVLETT.77.3865
Abstract: Generalized gradient approximations (GGAs) for the exchange-correlation energy improve upon the local spin density (LSD) description of atoms, molecules, and solids. We present a simple derivation of a simple GGA, in which all parameters (other than those in LSD) are fundamental constants. Only general features of the detailed construction underlying the Perdew-Wang 1991 (PW91) GGA are invoked. Improvements over PW91 include an accurate description of the linear response of the uniform electron gas, correct behavior under uniform scaling, and a smoother potential. Kohn-Sham density functional theory [1,2] is widely used for self-consistent-field electronic structure calculations of the ground-state properties of atoms, molecules, and solids. In this theory, only the exchange-correlation energy $E_{XC} = E_X + E_C$ as a functional of the electron spin densities $n_\uparrow(\mathbf{r})$ and $n_\downarrow(\mathbf{r})$ must be approximated. The most popular functionals have a form appropriate for slowly varying densities: the local spin density (LSD) approximation $E_{XC}^{\mathrm{LSD}} = \int d^3r\, n\, \epsilon_{XC}^{\mathrm{unif}}(n_\uparrow, n_\downarrow)$.

Journal Article DOI: 10.1006/METH.2001.1262
01 Dec 2001 - Methods
Abstract: The two most commonly used methods to analyze data from real-time, quantitative PCR experiments are absolute quantification and relative quantification. Absolute quantification determines the input copy number, usually by relating the PCR signal to a standard curve. Relative quantification relates the PCR signal of the target transcript in a treatment group to that of another sample such as an untreated control. The 2^(−ΔΔC_T) method is a convenient way to analyze the relative changes in gene expression from real-time quantitative PCR experiments. The purpose of this report is to present the derivation, assumptions, and applications of the 2^(−ΔΔC_T) method. In addition, we present the derivation and applications of two variations of the 2^(−ΔΔC_T) method that may be useful in the analysis of real-time, quantitative PCR data.
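The 2^(−ΔΔC_T) calculation itself is a two-step normalization: target Ct is normalized to a reference gene within each sample, then the treated sample is compared to the control. A minimal sketch with hypothetical Ct values (the numbers are invented for illustration):

```python
# Hypothetical threshold-cycle (Ct) values for a target gene and a
# reference gene, in treated and control samples.
ct_target_treated, ct_ref_treated = 24.0, 18.0
ct_target_control, ct_ref_control = 26.0, 18.0

# Step 1: normalize the target to the reference within each sample.
delta_ct_treated = ct_target_treated - ct_ref_treated    # 6.0
delta_ct_control = ct_target_control - ct_ref_control    # 8.0

# Step 2: compare the treated sample to the control.
delta_delta_ct = delta_ct_treated - delta_ct_control     # -2.0

# Assuming ~100% amplification efficiency (doubling per cycle):
fold_change = 2 ** (-delta_delta_ct)
print(fold_change)  # 4.0: the target appears ~4-fold up-regulated
```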

Topics: MicroRNA 34a (52%), Cell wall organization (50%)

Open access Proceedings Article DOI: 10.1109/CVPR.2016.90
Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun
27 Jun 2016
Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers, 8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.
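The core idea, a block that learns a residual function F(x) and adds the input back through a skip connection, can be sketched in a few lines of NumPy. This is a toy two-layer block to show the skip connection, not the paper's convolutional architecture:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """y = F(x, {w1, w2}) + x: the block learns a residual F rather
    than a full mapping, so the identity is easy to represent (F ≈ 0)."""
    f = relu(x @ w1) @ w2   # two-layer residual function F(x)
    return relu(f + x)      # skip connection adds the input back

rng = np.random.default_rng(0)
x = rng.standard_normal((1, 8))
# With zero weights F(x) = 0, so the block reduces to relu(x):
y = residual_block(x, np.zeros((8, 8)), np.zeros((8, 8)))
assert np.allclose(y, relu(x))
```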

Topics: Deep learning (53%), Residual (53%), Convolutional neural network (53%)

93,356 Citations

Journal Article DOI: 10.1016/S0022-2836(05)80360-2
Stephen F. Altschul, Warren Gish, Webb Miller, Eugene W. Myers, +1 more
Abstract: A new approach to rapid sequence comparison, basic local alignment search tool (BLAST), directly approximates alignments that optimize a measure of local similarity, the maximal segment pair (MSP) score. Recent mathematical results on the stochastic properties of MSP scores allow an analysis of the performance of this method as well as the statistical significance of alignments it generates. The basic algorithm is simple and robust; it can be implemented in a number of ways and applied in a variety of contexts including straightforward DNA and protein sequence database searches, motif searches, gene identification searches, and in the analysis of multiple regions of similarity in long DNA sequences. In addition to its flexibility and tractability to mathematical analysis, BLAST is an order of magnitude faster than existing sequence comparison tools of comparable sensitivity.

Topics: Substitution matrix (59%), Sim4 (58%), Alignment-free sequence analysis (57%)

Journal Article DOI: 10.1063/1.464913
Abstract: Despite the remarkable thermochemical accuracy of Kohn–Sham density‐functional theories with gradient corrections for exchange‐correlation [see, for example, A. D. Becke, J. Chem. Phys. 96, 2155 (1992)], we believe that further improvements are unlikely unless exact‐exchange information is considered. Arguments to support this view are presented, and a semiempirical exchange‐correlation functional containing local‐spin‐density, gradient, and exact‐exchange terms is tested on 56 atomization energies, 42 ionization potentials, 8 proton affinities, and 10 total atomic energies of first‐ and second‐row systems. This functional performs significantly better than previous functionals with gradient corrections only, and fits experimental atomization energies with an impressively small average absolute deviation of 2.4 kcal/mol. more

80,847 Citations

Open access Proceedings Article
Diederik P. Kingma, Jimmy Ba
01 Jan 2015
Abstract: We introduce Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments. The method is straightforward to implement, is computationally efficient, has little memory requirements, is invariant to diagonal rescaling of the gradients, and is well suited for problems that are large in terms of data and/or parameters. The method is also appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. The hyper-parameters have intuitive interpretations and typically require little tuning. Some connections to related algorithms, on which Adam was inspired, are discussed. We also analyze the theoretical convergence properties of the algorithm and provide a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework. Empirical results demonstrate that Adam works well in practice and compares favorably to other stochastic optimization methods. Finally, we discuss AdaMax, a variant of Adam based on the infinity norm.
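The per-parameter update Adam describes (exponential moving averages of the gradient and its square, bias correction, then a rescaled step) can be sketched as follows; the quadratic test objective, learning rate, and step count are illustrative choices, not from the paper:

```python
import math

def adam_step(theta, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update for a scalar parameter: moving averages of the
    gradient (m) and squared gradient (v), bias correction, then a
    rescaled gradient step."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad * grad
    m_hat = m / (1 - b1 ** t)   # bias-corrected first moment
    v_hat = v / (1 - b2 ** t)   # bias-corrected second moment
    theta = theta - lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v

# Toy problem: minimize f(x) = x^2, whose gradient is 2x.
x, m, v = 1.0, 0.0, 0.0
for t in range(1, 2001):
    x, m, v = adam_step(x, 2 * x, m, v, t, lr=0.01)
print(x)  # should end up near the minimum at 0
```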

Topics: Stochastic optimization (63%), Convex optimization (54%), Rate of convergence (52%)

78,539 Citations

Open access Journal Article DOI: 10.1107/S0108767307043930
George M. Sheldrick
Abstract: An account is given of the development of the SHELX system of computer programs from SHELX-76 to the present day. In addition to identifying useful innovations that have come into general use through their implementation in SHELX, a critical analysis is presented of the less-successful features, missed opportunities and desirable improvements for future releases of the software. An attempt is made to understand how a program originally designed for photographic intensity data, punched cards and computers over 10000 times slower than an average modern personal computer has managed to survive for so long. SHELXL is the most widely used program for small-molecule refinement and SHELXS and SHELXD are often employed for structure solution despite the availability of objectively superior programs. SHELXL also finds a niche for the refinement of macromolecules against high-resolution or twinned data; SHELXPRO acts as an interface for macromolecular applications. SHELXC, SHELXD and SHELXE are proving useful for the experimental phasing of macromolecules, especially because they are fast and robust and so are often employed in pipelines for high-throughput phasing. This paper could serve as a general literature citation when one or more of the open-source SHELX programs (and the Bruker AXS version SHELXTL) are employed in the course of a crystal-structure determination. more

Topics: Personal computer (55%), Literature citation (53%)

Open access Proceedings Article
03 Dec 2012
Abstract: We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0% which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.

Topics: Convolutional neural network (61%), Deep learning (59%), Dropout (neural networks) (54%)

73,871 Citations

Open access Journal Article DOI: 10.1037/0022-3514.51.6.1173
Abstract: In this article, we attempt to distinguish between the properties of moderator and mediator variables at a number of levels. First, we seek to make theorists and researchers aware of the importance of not using the terms moderator and mediator interchangeably by carefully elaborating, both conceptually and strategically, the many ways in which moderators and mediators differ. We then go beyond this largely pedagogical function and delineate the conceptual and strategic implications of making use of such distinctions with regard to a wide range of phenomena, including control and stress, attitudes, and personality traits. We also provide a specific compendium of analytic procedures appropriate for making the most effective use of the moderator and mediator distinction, both separately and in terms of a broader causal system that includes both moderators and mediators. more

Topics: Moderation (56%), Sobel test (51%), Moderated mediation (51%)

Journal Article DOI: 10.1111/J.2517-6161.1995.TB02031.X
Abstract: The common approach to the multiplicity problem calls for controlling the familywise error rate (FWER). This approach, though, has faults, and we point out a few. A different approach to problems of multiple significance testing is presented. It calls for controlling the expected proportion of falsely rejected hypotheses: the false discovery rate. This error rate is equivalent to the FWER when all hypotheses are true but is smaller otherwise. Therefore, in problems where the control of the false discovery rate rather than that of the FWER is desired, there is potential for a gain in power. A simple sequential Bonferroni-type procedure is proved to control the false discovery rate for independent test statistics, and a simulation study shows that the gain in power is substantial. The use of the new procedure and the appropriateness of the criterion are illustrated with examples.
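The "simple sequential Bonferroni-type procedure" is the step-up rule now widely known as the Benjamini-Hochberg procedure: sort the m p-values, find the largest rank i with p_(i) ≤ (i/m)q, and reject the i smallest. A sketch with made-up p-values:

```python
def benjamini_hochberg(pvals, q=0.05):
    """Return indices of hypotheses rejected at FDR level q.
    Reject the k smallest p-values, where k is the largest rank i
    with p_(i) <= (i / m) * q."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * q:
            k = rank
    return sorted(order[:k])

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205]
rejected = benjamini_hochberg(pvals, q=0.05)
print(rejected)  # → [0, 1]
```

Note that p = 0.039 is not rejected even though it is below 0.05: its step-up threshold is (3/8) × 0.05 ≈ 0.019.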

Topics: False discovery rate (72%), Per-comparison error rate (66%), False coverage rate (63%)

Open access Journal Article DOI: 10.1093/NAR/25.17.3389
Abstract: The BLAST programs are widely used tools for searching protein and DNA databases for sequence similarities. For protein comparisons, a variety of definitional, algorithmic and statistical refinements described here permits the execution time of the BLAST programs to be decreased substantially while enhancing their sensitivity to weak similarities. A new criterion for triggering the extension of word hits, combined with a new heuristic for generating gapped alignments, yields a gapped BLAST program that runs at approximately three times the speed of the original. In addition, a method is introduced for automatically combining statistically significant alignments produced by BLAST into a position-specific score matrix, and searching the database using this matrix. The resulting Position-Specific Iterated BLAST (PSI-BLAST) program runs at approximately the same speed per iteration as gapped BLAST, but in many cases is much more sensitive to weak but biologically relevant sequence similarities. PSI-BLAST is used to uncover several new and interesting members of the BRCT superfamily.

Topics: Substitution matrix (57%), Sequence database (54%), Sequence profiling tool (53%)

66,744 Citations

Open access Journal Article DOI: 10.1073/PNAS.74.12.5463
Abstract: A new method for determining nucleotide sequences in DNA is described. It is similar to the “plus and minus” method [Sanger, F. & Coulson, A. R. (1975) J. Mol. Biol. 94, 441-448] but makes use of the 2′,3′-dideoxy and arabinonucleoside analogues of the normal deoxynucleoside triphosphates, which act as specific chain-terminating inhibitors of DNA polymerase. The technique has been applied to the DNA of bacteriophage ϕX174 and is more rapid and more accurate than either the plus or the minus method. more

Topics: Dideoxynucleotide (60%), DNA polymerase (59%), Maxam–Gilbert sequencing (58%)

61,850 Citations

Open access Journal Article DOI: 10.1093/NAR/22.22.4673
Abstract: The sensitivity of the commonly used progressive multiple sequence alignment method has been greatly improved for the alignment of divergent protein sequences. Firstly, individual weights are assigned to each sequence in a partial alignment in order to down-weight near-duplicate sequences and up-weight the most divergent ones. Secondly, amino acid substitution matrices are varied at different alignment stages according to the divergence of the sequences to be aligned. Thirdly, residue-specific gap penalties and locally reduced gap penalties in hydrophilic regions encourage new gaps in potential loop regions rather than regular secondary structure. Fourthly, positions in early alignments where gaps have been opened receive locally reduced gap penalties to encourage the opening up of new gaps at these positions. These modifications are incorporated into a new program, CLUSTAL W which is freely available. more

Topics: Gap penalty (62%), Multiple sequence alignment (61%), Structural alignment (58%)

Journal Article DOI: 10.2307/2529310
J. R. Landis, Gary G. Koch
01 Mar 1977 - Biometrics
Abstract: This paper presents a general statistical methodology for the analysis of multivariate categorical data arising from observer reliability studies. The procedure essentially involves the construction of functions of the observed proportions which are directed at the extent to which the observers agree among themselves and the construction of test statistics for hypotheses involving these functions. Tests for interobserver bias are presented in terms of first-order marginal homogeneity and measures of interobserver agreement are developed as generalized kappa-type statistics. These procedures are illustrated with a clinical diagnosis example from the epidemiological literature.
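For the simplest two-rater case, the chance-corrected agreement that these generalized kappa-type statistics extend is Cohen's kappa, κ = (p_o − p_e) / (1 − p_e). A sketch with invented ratings:

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Two-rater agreement corrected for chance:
    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    and p_e the agreement expected from the raters' marginals."""
    n = len(ratings_a)
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    ca, cb = Counter(ratings_a), Counter(ratings_b)
    p_e = sum(ca[k] * cb[k] for k in ca) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical yes/no diagnoses from two observers.
a = ["yes", "yes", "no", "no", "yes", "no", "yes", "no", "yes", "yes"]
b = ["yes", "no", "no", "no", "yes", "no", "yes", "yes", "yes", "yes"]
print(round(cohens_kappa(a, b), 3))  # → 0.583
```

On the benchmark scale proposed in this paper, a kappa of 0.58 falls in the "moderate" agreement band (0.41 to 0.60).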

Topics: Categorical variable (58%), Fleiss' kappa (54%), Intra-rater reliability (51%)

56,227 Citations

Journal Article DOI: 10.1016/0749-5978(91)90020-T
Icek Ajzen
Abstract: Research dealing with various aspects of the theory of planned behavior (Ajzen, 1985, 1987) is reviewed, and some unresolved issues are discussed. In broad terms, the theory is found to be well supported by empirical evidence. Intentions to perform behaviors of different kinds can be predicted with high accuracy from attitudes toward the behavior, subjective norms, and perceived behavioral control; and these intentions, together with perceptions of behavioral control, account for considerable variance in actual behavior. Attitudes, subjective norms, and perceived behavioral control are shown to be related to appropriate sets of salient behavioral, normative, and control beliefs about the behavior, but the exact nature of these relations is still uncertain. Expectancy-value formulations are found to be only partly successful in dealing with these relations. Optimal rescaling of expectancy and value measures is offered as a means of dealing with measurement limitations. Finally, inclusion of past behavior in the prediction equation is shown to provide a means of testing the theory's sufficiency, another issue that remains unresolved. The limited available evidence concerning this question shows that the theory is predicting behavior quite well in comparison to the ceiling imposed by behavioral reliability. © 1991 Academic Press, Inc.

Topics: Theory of planned behavior (64%), Reasoned action approach (62%), Expectancy theory (60%)

Open access Journal Article DOI: 10.1093/OXFORDJOURNALS.MOLBEV.A040454
Naruya Saitou, Masatoshi Nei
Abstract: A new method called the neighbor-joining method is proposed for reconstructing phylogenetic trees from evolutionary distance data. The principle of this method is to find pairs of operational taxonomic units (OTUs [= neighbors]) that minimize the total branch length at each stage of clustering of OTUs starting with a starlike tree. The branch lengths as well as the topology of a parsimonious tree can quickly be obtained by using this method. Using computer simulation, we studied the efficiency of this method in obtaining the correct unrooted tree in comparison with that of five other tree-making methods: the unweighted pair group method of analysis, Farris's method, Sattath and Tversky's method, Li's method, and Tateno et al.'s modified Farris method. The new, neighbor-joining method and Sattath and Tversky's method are shown to be generally better than the other methods.
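The pair-selection step at the heart of neighbor joining, choosing the two OTUs whose joining minimizes total branch length, is commonly written (in the later Studier-Keppler formulation of the same criterion) as minimizing Q(i, j) = (n − 2) d(i, j) − Σ_k d(i, k) − Σ_k d(j, k). A sketch on a made-up 4-taxon distance matrix:

```python
import numpy as np

def nj_pick_pair(d):
    """Select the pair (i, j) minimizing the neighbor-joining
    criterion Q(i, j) = (n - 2) d(i, j) - r_i - r_j, where r_i is
    the row sum of the distance matrix."""
    n = d.shape[0]
    r = d.sum(axis=1)
    best, pair = np.inf, None
    for i in range(n):
        for j in range(i + 1, n):
            q = (n - 2) * d[i, j] - r[i] - r[j]
            if q < best:
                best, pair = q, (i, j)
    return pair

# Hypothetical 4-taxon distances; taxa 0 and 1 are close neighbors.
d = np.array([[0, 3, 9, 9],
              [3, 0, 8, 8],
              [9, 8, 0, 6],
              [9, 8, 6, 0]], dtype=float)
print(nj_pick_pair(d))  # → (0, 1)
```

In a full implementation this step repeats: the chosen pair is replaced by a new internal node and the matrix shrinks until the tree is resolved.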

Topics: Tree rearrangement (55%), Split (55%), Computational phylogenetics (54%)

Open access Journal Article DOI: 10.1371/JOURNAL.PMED.1000097
David Moher, Alessandro Liberati, Jennifer Tetzlaff, +1 more
21 Jul 2009 - PLOS Medicine
Abstract: David Moher and colleagues introduce PRISMA, an update of the QUOROM guidelines for reporting systematic reviews and meta-analyses.

Topics: Systematic review (53%)

53,418 Citations

Open access Book
01 Jan 1983
Abstract: In this Section: 1. Brief Table of Contents 2. Full Table of Contents 1. BRIEF TABLE OF CONTENTS Chapter 1 Introduction Chapter 2 A Guide to Statistical Techniques: Using the Book Chapter 3 Review of Univariate and Bivariate Statistics Chapter 4 Cleaning Up Your Act: Screening Data Prior to Analysis Chapter 5 Multiple Regression Chapter 6 Analysis of Covariance Chapter 7 Multivariate Analysis of Variance and Covariance Chapter 8 Profile Analysis: The Multivariate Approach to Repeated Measures Chapter 9 Discriminant Analysis Chapter 10 Logistic Regression Chapter 11 Survival/Failure Analysis Chapter 12 Canonical Correlation Chapter 13 Principal Components and Factor Analysis Chapter 14 Structural Equation Modeling Chapter 15 Multilevel Linear Modeling Chapter 16 Multiway Frequency Analysis 2. FULL TABLE OF CONTENTS Chapter 1: Introduction Multivariate Statistics: Why? Some Useful Definitions Linear Combinations of Variables Number and Nature of Variables to Include Statistical Power Data Appropriate for Multivariate Statistics Organization of the Book Chapter 2: A Guide to Statistical Techniques: Using the Book Research Questions and Associated Techniques Some Further Comparisons A Decision Tree Technique Chapters Preliminary Check of the Data Chapter 3: Review of Univariate and Bivariate Statistics Hypothesis Testing Analysis of Variance Parameter Estimation Effect Size Bivariate Statistics: Correlation and Regression. Chi-Square Analysis Chapter 4: Cleaning Up Your Act: Screening Data Prior to Analysis Important Issues in Data Screening Complete Examples of Data Screening Chapter 5: Multiple Regression General Purpose and Description Kinds of Research Questions Limitations to Regression Analyses Fundamental Equations for Multiple Regression Major Types of Multiple Regression Some Important Issues. 
Complete Examples of Regression Analysis Comparison of Programs Chapter 6: Analysis of Covariance General Purpose and Description Kinds of Research Questions Limitations to Analysis of Covariance Fundamental Equations for Analysis of Covariance Some Important Issues Complete Example of Analysis of Covariance Comparison of Programs Chapter 7: Multivariate Analysis of Variance and Covariance General Purpose and Description Kinds of Research Questions Limitations to Multivariate Analysis of Variance and Covariance Fundamental Equations for Multivariate Analysis of Variance and Covariance Some Important Issues Complete Examples of Multivariate Analysis of Variance and Covariance Comparison of Programs Chapter 8: Profile Analysis: The Multivariate Approach to Repeated Measures General Purpose and Description Kinds of Research Questions Limitations to Profile Analysis Fundamental Equations for Profile Analysis Some Important Issues Complete Examples of Profile Analysis Comparison of Programs Chapter 9: Discriminant Analysis General Purpose and Description Kinds of Research Questions Limitations to Discriminant Analysis Fundamental Equations for Discriminant Analysis Types of Discriminant Analysis Some Important Issues Comparison of Programs Chapter 10: Logistic Regression General Purpose and Description Kinds of Research Questions Limitations to Logistic Regression Analysis Fundamental Equations for Logistic Regression Types of Logistic Regression Some Important Issues Complete Examples of Logistic Regression Comparison of Programs Chapter 11: Survival/Failure Analysis General Purpose and Description Kinds of Research Questions Limitations to Survival Analysis Fundamental Equations for Survival Analysis Types of Survival Analysis Some Important Issues Complete Example of Survival Analysis Comparison of Programs Chapter 12: Canonical Correlation General Purpose and Description Kinds of Research Questions Limitations Fundamental Equations for Canonical Correlation Some 
Important Issues Complete Example of Canonical Correlation Comparison of Programs Chapter 13: Principal Components and Factor Analysis General Purpose and Description Kinds of Research Questions Limitations Fundamental Equations for Factor Analysis Major Types of Factor Analysis Some Important Issues Complete Example of FA Comparison of Programs Chapter 14: Structural Equation Modeling General Purpose and Description Kinds of Research Questions Limitations to Structural Equation Modeling Fundamental Equations for Structural Equations Modeling Some Important Issues Complete Examples of Structural Equation Modeling Analysis. Comparison of Programs Chapter 15: Multilevel Linear Modeling General Purpose and Description Kinds of Research Questions Limitations to Multilevel Linear Modeling Fundamental Equations Types of MLM Some Important Issues Complete Example of MLM Comparison of Programs Chapter 16: Multiway Frequency Analysis General Purpose and Description Kinds of Research Questions Limitations to Multiway Frequency Analysis Fundamental Equations for Multiway Frequency Analysis Some Important Issues Complete Example of Multiway Frequency Analysis Comparison of Programs more

Topics: Path analysis (statistics) (56%), Univariate (55%), Multivariate statistics (53%)

Open access Book
01 Sep 1988
Abstract: From the Publisher: This book brings together, in an informal and tutorial fashion, the computer techniques, mathematical tools, and research results that will enable both students and practitioners to apply genetic algorithms to problems in many fields. Major concepts are illustrated with running examples, and major algorithms are illustrated by Pascal computer programs. No prior knowledge of GAs or genetics is assumed, and only a minimum of computer programming and mathematics background is required.

Topics: Genetic representation (69%), Genetic programming (67%), Pascal (programming language) (62%)

Open access Journal Article DOI: 10.1073/PNAS.76.9.4350
Abstract: A method has been devised for the electrophoretic transfer of proteins from polyacrylamide gels to nitrocellulose sheets. The method results in quantitative transfer of ribosomal proteins from gels containing urea. For sodium dodecyl sulfate gels, the original band pattern was obtained with no loss of resolution, but the transfer was not quantitative. The method allows detection of proteins by autoradiography and is simpler than conventional procedures. The immobilized proteins were detectable by immunological procedures. All additional binding capacity on the nitrocellulose was blocked with excess protein; then a specific antibody was bound and, finally, a second antibody directed against the first antibody. The second antibody was either radioactively labeled or conjugated to fluorescein or to peroxidase. The specific protein was then detected by either autoradiography, under UV light, or by the peroxidase reaction product, respectively. In the latter case, as little as 100 pg of protein was clearly detectable. It is anticipated that the procedure will be applicable to analysis of a wide variety of proteins with specific reactions or ligands. more

Topics: Electroblotting (61%), Southwestern blot (56%), Perinuclear theca (53%)

Open access Journal Article DOI: 10.3322/CAAC.20107
Ahmedin Jemal, Freddie Bray, Jacques Ferlay, Elizabeth Ward, +1 more
Abstract: The global burden of cancer continues to increase largely because of the aging and growth of the world population alongside an increasing adoption of cancer-causing behaviors, particularly smoking, in economically developing countries. Based on the GLOBOCAN 2008 estimates, about 12.7 million cancer cases and 7.6 million cancer deaths are estimated to have occurred in 2008; of these, 56% of the cases and 64% of the deaths occurred in the economically developing world. Breast cancer is the most frequently diagnosed cancer and the leading cause of cancer death among females, accounting for 23% of the total cancer cases and 14% of the cancer deaths. Lung cancer is the leading cancer site in males, comprising 17% of the total new cancer cases and 23% of the total cancer deaths. Breast cancer is now also the leading cause of cancer death among females in economically developing countries, a shift from the previous decade during which the most common cause of cancer death was cervical cancer. Further, the mortality burden for lung cancer among females in developing countries is as high as the burden for cervical cancer, with each accounting for 11% of the total female cancer deaths. Although overall cancer incidence rates in the developing world are half those seen in the developed world in both sexes, the overall cancer mortality rates are generally similar. Cancer survival tends to be poorer in developing countries, most likely because of a combination of a late stage at diagnosis and limited access to timely and standard treatment. A substantial proportion of the worldwide burden of cancer could be prevented through the application of existing cancer control knowledge and by implementing programs for tobacco control, vaccination (for liver and cervical cancers), and early detection and treatment, as well as public health campaigns promoting physical activity and a healthier dietary intake. 
Clinicians, public health professionals, and policy makers can play an active role in accelerating the application of such interventions globally. more

Topics: Epidemiology of cancer (69%), Cancer (67%), Preventive healthcare (63%)

Book Chapter DOI: 10.1007/978-1-4612-4380-9_25
Edward L. Kaplan, Paul Meier
Abstract: In lifetesting, medical follow-up, and other fields the observation of the time of occurrence of the event of interest (called a death) may be prevented for some of the items of the sample by the previous occurrence of some other event (called a loss). Losses may be either accidental or controlled, the latter resulting from a decision to terminate certain observations. In either case it is usually assumed in this paper that the lifetime (age at death) is independent of the potential loss time; in practice this assumption deserves careful scrutiny. Despite the resulting incompleteness of the data, it is desired to estimate the proportion P(t) of items in the population whose lifetimes would exceed t (in the absence of such losses), without making any assumption about the form of the function P(t). The observation for each item of a suitable initial event, marking the beginning of its lifetime, is presupposed. For random samples of size N the product-limit (PL) estimate can be defined as follows: L...
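The product-limit estimate, truncated in this listing, is in its standard form Ŝ(t) = Π_{t_i ≤ t} (n_i − d_i) / n_i over the observed death times, where n_i counts items still at risk and d_i the deaths at t_i. A sketch with invented times and censoring flags:

```python
def kaplan_meier(times, events):
    """Product-limit estimate: at each distinct death time, multiply
    the survival S by (n_at_risk - deaths) / n_at_risk. Censored
    observations (event = 0) leave the risk set without a factor."""
    data = sorted(zip(times, events))
    n = len(data)
    s, curve, i = 1.0, [], 0
    while i < n:
        t = data[i][0]
        block = [e for tt, e in data if tt == t]  # all items at time t
        deaths = sum(block)
        if deaths:
            s *= (n - i - deaths) / (n - i)       # n - i still at risk
            curve.append((t, s))
        i += len(block)
    return curve

# Hypothetical follow-up: event = 1 is a death, 0 a loss (censoring).
times = [1, 2, 3, 4, 5]
events = [1, 0, 1, 1, 0]
curve = kaplan_meier(times, events)
print(curve)  # survival steps down only at the death times 1, 3, 4
```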

Topics: Kaplan-Meier Estimate (54%), Population (52%)

Open access Proceedings Article
Karen Simonyan, Andrew Zisserman
01 Jan 2015
Abstract: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision. more

49,857 Citations

Open access Journal Article DOI: 10.1126/SCIENCE.1102896
Kostya S. Novoselov, Andre K. Geim, Sergey V. Morozov, Da Jiang, +4 more
22 Oct 2004 - Science
Abstract: We describe monocrystalline graphitic films, which are a few atoms thick but are nonetheless stable under ambient conditions, metallic, and of remarkably high quality. The films are found to be a two-dimensional semimetal with a tiny overlap between valence and conductance bands, and they exhibit a strong ambipolar electric field effect such that electrons and holes in concentrations up to 10^13 per square centimeter and with room-temperature mobilities of ∼10,000 square centimeters per volt-second can be induced by applying gate voltage.

Topics: Carbon film (57%), Hall effect (55%), Ambipolar diffusion (54%)

Journal ArticleDOI: 10.1016/0304-405X(76)90026-X
Abstract: In this paper we draw on recent progress in the theory of (1) property rights, (2) agency, and (3) finance to develop a theory of ownership structure for the firm. In addition to tying together elements of the theory of each of these three areas, our analysis casts new light on and has implications for a variety of issues in the professional and popular literature, such as the definition of the firm, the “separation of ownership and control,” the “social responsibility” of business, the definition of a “corporate objective function,” the determination of an optimal capital structure, the specification of the content of credit agreements, the theory of organizations, and the supply side of the completeness-of-markets problem.

Topics: State ownership (62%), Capital structure (59%), Trade-off theory of capital structure (59%)

45,832 Citations

Open access · Book
Thomas M. Cover, Joy A. Thomas · Institutions (2)
01 Jan 1991-
Abstract: Preface to the Second Edition. Preface to the First Edition. Acknowledgments for the Second Edition. Acknowledgments for the First Edition. 1. Introduction and Preview. 1.1 Preview of the Book. 2. Entropy, Relative Entropy, and Mutual Information. 2.1 Entropy. 2.2 Joint Entropy and Conditional Entropy. 2.3 Relative Entropy and Mutual Information. 2.4 Relationship Between Entropy and Mutual Information. 2.5 Chain Rules for Entropy, Relative Entropy, and Mutual Information. 2.6 Jensen's Inequality and Its Consequences. 2.7 Log Sum Inequality and Its Applications. 2.8 Data-Processing Inequality. 2.9 Sufficient Statistics. 2.10 Fano's Inequality. Summary. Problems. Historical Notes. 3. Asymptotic Equipartition Property. 3.1 Asymptotic Equipartition Property Theorem. 3.2 Consequences of the AEP: Data Compression. 3.3 High-Probability Sets and the Typical Set. Summary. Problems. Historical Notes. 4. Entropy Rates of a Stochastic Process. 4.1 Markov Chains. 4.2 Entropy Rate. 4.3 Example: Entropy Rate of a Random Walk on a Weighted Graph. 4.4 Second Law of Thermodynamics. 4.5 Functions of Markov Chains. Summary. Problems. Historical Notes. 5. Data Compression. 5.1 Examples of Codes. 5.2 Kraft Inequality. 5.3 Optimal Codes. 5.4 Bounds on the Optimal Code Length. 5.5 Kraft Inequality for Uniquely Decodable Codes. 5.6 Huffman Codes. 5.7 Some Comments on Huffman Codes. 5.8 Optimality of Huffman Codes. 5.9 Shannon-Fano-Elias Coding. 5.10 Competitive Optimality of the Shannon Code. 5.11 Generation of Discrete Distributions from Fair Coins. Summary. Problems. Historical Notes. 6. Gambling and Data Compression. 6.1 The Horse Race. 6.2 Gambling and Side Information. 6.3 Dependent Horse Races and Entropy Rate. 6.4 The Entropy of English. 6.5 Data Compression and Gambling. 6.6 Gambling Estimate of the Entropy of English. Summary. Problems. Historical Notes. 7. Channel Capacity. 7.1 Examples of Channel Capacity. 7.2 Symmetric Channels. 7.3 Properties of Channel Capacity. 
7.4 Preview of the Channel Coding Theorem. 7.5 Definitions. 7.6 Jointly Typical Sequences. 7.7 Channel Coding Theorem. 7.8 Zero-Error Codes. 7.9 Fano's Inequality and the Converse to the Coding Theorem. 7.10 Equality in the Converse to the Channel Coding Theorem. 7.11 Hamming Codes. 7.12 Feedback Capacity. 7.13 Source-Channel Separation Theorem. Summary. Problems. Historical Notes. 8. Differential Entropy. 8.1 Definitions. 8.2 AEP for Continuous Random Variables. 8.3 Relation of Differential Entropy to Discrete Entropy. 8.4 Joint and Conditional Differential Entropy. 8.5 Relative Entropy and Mutual Information. 8.6 Properties of Differential Entropy, Relative Entropy, and Mutual Information. Summary. Problems. Historical Notes. 9. Gaussian Channel. 9.1 Gaussian Channel: Definitions. 9.2 Converse to the Coding Theorem for Gaussian Channels. 9.3 Bandlimited Channels. 9.4 Parallel Gaussian Channels. 9.5 Channels with Colored Gaussian Noise. 9.6 Gaussian Channels with Feedback. Summary. Problems. Historical Notes. 10. Rate Distortion Theory. 10.1 Quantization. 10.2 Definitions. 10.3 Calculation of the Rate Distortion Function. 10.4 Converse to the Rate Distortion Theorem. 10.5 Achievability of the Rate Distortion Function. 10.6 Strongly Typical Sequences and Rate Distortion. 10.7 Characterization of the Rate Distortion Function. 10.8 Computation of Channel Capacity and the Rate Distortion Function. Summary. Problems. Historical Notes. 11. Information Theory and Statistics. 11.1 Method of Types. 11.2 Law of Large Numbers. 11.3 Universal Source Coding. 11.4 Large Deviation Theory. 11.5 Examples of Sanov's Theorem. 11.6 Conditional Limit Theorem. 11.7 Hypothesis Testing. 11.8 Chernoff-Stein Lemma. 11.9 Chernoff Information. 11.10 Fisher Information and the Cramér-Rao Inequality. Summary. Problems. Historical Notes. 12. Maximum Entropy. 12.1 Maximum Entropy Distributions. 12.2 Examples. 12.3 Anomalous Maximum Entropy Problem. 12.4 Spectrum Estimation.
12.5 Entropy Rates of a Gaussian Process. 12.6 Burg's Maximum Entropy Theorem. Summary. Problems. Historical Notes. 13. Universal Source Coding. 13.1 Universal Codes and Channel Capacity. 13.2 Universal Coding for Binary Sequences. 13.3 Arithmetic Coding. 13.4 Lempel-Ziv Coding. 13.5 Optimality of Lempel-Ziv Algorithms. Summary. Problems. Historical Notes. 14. Kolmogorov Complexity. 14.1 Models of Computation. 14.2 Kolmogorov Complexity: Definitions and Examples. 14.3 Kolmogorov Complexity and Entropy. 14.4 Kolmogorov Complexity of Integers. 14.5 Algorithmically Random and Incompressible Sequences. 14.6 Universal Probability. 14.7 The Halting Problem and the Noncomputability of Kolmogorov Complexity. 14.8 Ω. 14.9 Universal Gambling. 14.10 Occam's Razor. 14.11 Kolmogorov Complexity and Universal Probability. 14.12 Kolmogorov Sufficient Statistic. 14.13 Minimum Description Length Principle. Summary. Problems. Historical Notes. 15. Network Information Theory. 15.1 Gaussian Multiple-User Channels. 15.2 Jointly Typical Sequences. 15.3 Multiple-Access Channel. 15.4 Encoding of Correlated Sources. 15.5 Duality Between Slepian-Wolf Encoding and Multiple-Access Channels. 15.6 Broadcast Channel. 15.7 Relay Channel. 15.8 Source Coding with Side Information. 15.9 Rate Distortion with Side Information. 15.10 General Multiterminal Networks. Summary. Problems. Historical Notes. 16. Information Theory and Portfolio Theory. 16.1 The Stock Market: Some Definitions. 16.2 Kuhn-Tucker Characterization of the Log-Optimal Portfolio. 16.3 Asymptotic Optimality of the Log-Optimal Portfolio. 16.4 Side Information and the Growth Rate. 16.5 Investment in Stationary Markets. 16.6 Competitive Optimality of the Log-Optimal Portfolio. 16.7 Universal Portfolios. 16.8 Shannon-McMillan-Breiman Theorem (General AEP). Summary. Problems. Historical Notes. 17. Inequalities in Information Theory. 17.1 Basic Inequalities of Information Theory. 17.2 Differential Entropy. 17.3 Bounds on Entropy and Relative Entropy. 17.4 Inequalities for Types.
17.5 Combinatorial Bounds on Entropy. 17.6 Entropy Rates of Subsets. 17.7 Entropy and Fisher Information. 17.8 Entropy Power Inequality and Brunn-Minkowski Inequality. 17.9 Inequalities for Determinants. 17.10 Inequalities for Ratios of Determinants. Summary. Problems. Historical Notes. Bibliography. List of Symbols. Index.
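The core quantities of Chapter 2, entropy and mutual information, can be sketched numerically. This is an illustrative fragment of ours, not the book's code:

```python
import math

def entropy(pmf):
    # H(X) = -sum p log2 p, in bits; zero-probability terms contribute nothing.
    return -sum(p * math.log2(p) for p in pmf if p > 0)

def mutual_information(joint):
    # I(X;Y) = H(X) + H(Y) - H(X,Y), with the joint pmf given as a 2-D list.
    px = [sum(row) for row in joint]            # marginal of X (rows)
    py = [sum(col) for col in zip(*joint)]      # marginal of Y (columns)
    h_joint = entropy([p for row in joint for p in row])
    return entropy(px) + entropy(py) - h_joint
```

A fair bit has entropy([0.5, 0.5]) = 1 bit; a perfectly correlated pair of bits, joint pmf [[0.5, 0], [0, 0.5]], carries I(X;Y) = 1 bit, while the independent uniform pair [[0.25, 0.25], [0.25, 0.25]] carries 0.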

Topics: Shannon's source coding theorem (72%), Entropy rate (69%), Joint entropy (68%)

Journal ArticleDOI: 10.1103/PHYSREVA.38.3098
Axel D. Becke · Institutions (1)
15 Sep 1988-Physical Review A
Abstract: Current gradient-corrected density-functional approximations for the exchange energies of atomic and molecular systems fail to reproduce the correct 1/r asymptotic behavior of the exchange-energy density. Here we report a gradient-corrected exchange-energy functional with the proper asymptotic limit. Our functional, containing only one parameter, fits the exact Hartree-Fock exchange energies of a wide variety of atomic systems with remarkable accuracy, surpassing the performance of previous functionals containing two parameters or more.
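For reference, the one-parameter functional described here is conventionally written as follows; this form is reconstructed from standard presentations of the B88 functional, not transcribed from the abstract:

```latex
E_X^{\mathrm{B88}} \;=\; E_X^{\mathrm{LSDA}}
  \;-\; \beta \sum_{\sigma} \int
  \frac{\rho_\sigma^{4/3}\, x_\sigma^{2}}
       {1 + 6\beta\, x_\sigma \sinh^{-1} x_\sigma}\, d^3 r,
\qquad
x_\sigma = \frac{|\nabla \rho_\sigma|}{\rho_\sigma^{4/3}},
```

where σ runs over the two spin densities and β ≈ 0.0042 a.u. is the single fitted parameter; the denominator's sinh⁻¹ term is what produces the correct 1/r asymptotic exchange-energy density.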