
Showing papers by "University of Illinois at Urbana–Champaign" published in 2005


Journal ArticleDOI
TL;DR: NAMD as discussed by the authors is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems that scales to hundreds of processors on high-end parallel platforms, as well as tens of processors on low-cost commodity clusters, and also runs on individual desktop and laptop computers.
Abstract: NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD scales to hundreds of processors on high-end parallel platforms, as well as tens of processors on low-cost commodity clusters, and also runs on individual desktop and laptop computers. NAMD works with AMBER and CHARMM potential functions, parameters, and file formats. This article, directed to novices as well as experts, first introduces concepts and methods used in the NAMD program, describing the classical molecular dynamics force field, equations of motion, and integration methods along with the efficient electrostatics evaluation algorithms employed and temperature and pressure controls used. Features for steering the simulation across barriers and for calculating both alchemical and conformational free energy differences are presented. The motivations for and a roadmap to the internal design of NAMD, implemented in C++ and based on Charm++ parallel objects, are outlined. The factors affecting the serial and parallel performance of a simulation are discussed. Finally, typical NAMD use is illustrated with representative applications to a small, a medium, and a large biomolecular system, highlighting particular features of NAMD, for example, the Tcl scripting language. The article also provides a list of the key features of NAMD and discusses the benefits of combining NAMD with the molecular graphics/sequence analysis software VMD and the grid computing/collaboratory software BioCoRE. NAMD is distributed free of charge with source code at www.ks.uiuc.edu.

14,558 citations
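
The integration methods the article introduces center on the velocity Verlet scheme standard in classical molecular dynamics. Below is a minimal illustrative sketch in Python, not NAMD's C++ implementation; the harmonic force and all parameters are hypothetical.

```python
import numpy as np

def velocity_verlet(x, v, force, mass, dt, n_steps):
    """Velocity Verlet integration of Newton's equations of motion,
    the standard scheme in classical molecular dynamics codes."""
    f = force(x)
    for _ in range(n_steps):
        v_half = v + 0.5 * dt * f / mass   # half-step velocity update
        x = x + dt * v_half                # full-step position update
        f = force(x)                       # recompute forces at new positions
        v = v_half + 0.5 * dt * f / mass   # second half-step velocity update
    return x, v

# Toy usage: one particle in a harmonic well, F(x) = -k x with k = 1.
x, v = velocity_verlet(np.array([1.0]), np.array([0.0]),
                       lambda x: -x, mass=1.0, dt=0.01, n_steps=1000)
```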


Journal ArticleDOI
22 Jul 2005-Science
TL;DR: Global croplands, pastures, plantations, and urban areas have expanded in recent decades, accompanied by large increases in energy, water, and fertilizer consumption, along with considerable losses of biodiversity.
Abstract: Land use has generally been considered a local environmental issue, but it is becoming a force of global importance. Worldwide changes to forests, farmlands, waterways, and air are being driven by the need to provide food, fiber, water, and shelter to more than six billion people. Global croplands, pastures, plantations, and urban areas have expanded in recent decades, accompanied by large increases in energy, water, and fertilizer consumption, along with considerable losses of biodiversity. Such changes in land use have enabled humans to appropriate an increasing share of the planet’s resources, but they also potentially undermine the capacity of ecosystems to sustain food production, maintain freshwater and forest resources, regulate climate and air quality, and ameliorate infectious diseases. We face the challenge of managing trade-offs between immediate human needs and maintaining the capacity of the biosphere to provide goods and services in the long term.

10,117 citations



Book
01 Jan 2005
TL;DR: This textbook develops the fundamentals of wireless communication, covering point-to-point detection and diversity, cellular multiple access and interference management, the capacity of wireless channels, multiuser and opportunistic communication, and MIMO techniques from spatial multiplexing to the diversity-multiplexing tradeoff.
Abstract: 1. Introduction 2. The wireless channel 3. Point-to-point communication: detection, diversity and channel uncertainty 4. Cellular systems: multiple access and interference management 5. Capacity of wireless channels 6. Multiuser capacity and opportunistic communication 7. MIMO I: spatial multiplexing and channel modeling 8. MIMO II: capacity and multiplexing architectures 9. MIMO III: diversity-multiplexing tradeoff and universal space-time codes 10. MIMO IV: multiuser communication A. Detection and estimation in additive Gaussian noise B. Information theory background.

8,084 citations
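
As a pointer to a central quantity in the book's MIMO chapters, here is a short sketch of the MIMO channel capacity with equal power allocation across transmit antennas, C = log2 det(I + (SNR/nt) HH*). This is standard information theory, not code from the book; the Rayleigh channel and SNR below are hypothetical.

```python
import numpy as np

def mimo_capacity(H, snr):
    """Capacity in bits/s/Hz of a MIMO channel with equal power allocation:
    C = log2 det(I + (SNR / n_t) * H H^*)."""
    nr, nt = H.shape
    return float(np.real(np.log2(np.linalg.det(
        np.eye(nr) + (snr / nt) * H @ H.conj().T))))

# 4x4 i.i.d. Rayleigh-fading channel at 10 dB SNR (snr = 10 in linear scale).
rng = np.random.default_rng(0)
H = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) / np.sqrt(2)
print(mimo_capacity(H, snr=10.0))
```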



Journal ArticleDOI
TL;DR: The results reveal that happiness is associated with and precedes numerous successful outcomes, as well as behaviors paralleling success, and the evidence suggests that positive affect may be the cause of many of the desirable characteristics, resources, and successes correlated with happiness.
Abstract: Numerous studies show that happy individuals are successful across multiple life domains, including marriage, friendship, income, work performance, and health. The authors suggest a conceptual model to account for these findings, arguing that the happiness-success link exists not only because success makes people happy, but also because positive affect engenders success. Three classes of evidence--crosssectional, longitudinal, and experimental--are documented to test their model. Relevant studies are described and their effect sizes combined meta-analytically. The results reveal that happiness is associated with and precedes numerous successful outcomes, as well as behaviors paralleling success. Furthermore, the evidence suggests that positive affect--the hallmark of well-being--may be the cause of many of the desirable characteristics, resources, and successes correlated with happiness. Limitations, empirical issues, and important future research questions are discussed.

5,713 citations


Journal ArticleDOI
TL;DR: This paper considers the requirements and implementation constraints on a framework that simultaneously enables an efficient discretization with associated hierarchical indexation and fast analysis/synthesis of functions defined on the sphere and demonstrates how these are explicitly satisfied by HEALPix.
Abstract: HEALPix, the Hierarchical Equal Area isoLatitude Pixelization, is a versatile structure for the pixelization of data on the sphere. An associated library of computational algorithms and visualization software supports fast scientific applications executable directly on discretized spherical maps generated from very large volumes of astronomical data. Originally developed to address the data processing and analysis needs of the present generation of cosmic microwave background experiments (e.g., BOOMERANG, WMAP), HEALPix can be expanded to meet many of the profound challenges that will arise in confrontation with the observational output of future missions and experiments, including, e.g., Planck, Herschel, SAFIR, and the Beyond Einstein inflation probe. In this paper we consider the requirements and implementation constraints on a framework that simultaneously enables an efficient discretization with associated hierarchical indexation and fast analysis/synthesis of functions defined on the sphere. We demonstrate how these are explicitly satisfied by HEALPix.

5,518 citations
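
For readers who want to try the pixelization, a minimal usage sketch with healpy, the Python binding of the HEALPix library; the sky direction below is hypothetical.

```python
import numpy as np
import healpy as hp  # Python binding of the HEALPix library

nside = 64                        # resolution parameter (a power of 2)
npix = hp.nside2npix(nside)       # total number of pixels = 12 * nside**2

# Hierarchical indexation: map a direction (colatitude theta, longitude phi,
# in radians) to its pixel index.
theta, phi = np.radians(60.0), np.radians(45.0)   # hypothetical direction
ipix = hp.ang2pix(nside, theta, phi)

# Equal-area property: every pixel subtends the same solid angle.
print(npix, ipix, 4 * np.pi / npix)
```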


01 Jan 2005

4,663 citations


Journal ArticleDOI
TL;DR: A "true" two-dimensional transform that can capture the intrinsic geometrical structure that is key in visual information is pursued and it is shown that with parabolic scaling and sufficient directional vanishing moments, contourlets achieve the optimal approximation rate for piecewise smooth functions with discontinuities along twice continuously differentiable curves.
Abstract: The limitations of commonly used separable extensions of one-dimensional transforms, such as the Fourier and wavelet transforms, in capturing the geometry of image edges are well known. In this paper, we pursue a "true" two-dimensional transform that can capture the intrinsic geometrical structure that is key in visual information. The main challenge in exploring geometry in images comes from the discrete nature of the data. Thus, unlike other approaches, such as curvelets, that first develop a transform in the continuous domain and then discretize for sampled data, our approach starts with a discrete-domain construction and then studies its convergence to an expansion in the continuous domain. Specifically, we construct a discrete-domain multiresolution and multidirection expansion using nonseparable filter banks, in much the same way that wavelets were derived from filter banks. This construction results in a flexible multiresolution, local, and directional image expansion using contour segments, and, thus, it is named the contourlet transform. The discrete contourlet transform has a fast iterated filter bank algorithm that requires an order N operations for N-pixel images. Furthermore, we establish a precise link between the developed filter bank and the associated continuous-domain contourlet expansion via a directional multiresolution analysis framework. We show that with parabolic scaling and sufficient directional vanishing moments, contourlets achieve the optimal approximation rate for piecewise smooth functions with discontinuities along twice continuously differentiable curves. Finally, we show some numerical experiments demonstrating the potential of contourlets in several image processing applications.

3,948 citations
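
The optimal approximation rate claimed in the abstract can be stated precisely: for images that are C² away from discontinuities along C² curves, the best M-term contourlet approximation f̂_M satisfies

```latex
\bigl\| f - \hat{f}_M \bigr\|_2^2 \;\le\; C\,(\log M)^3\, M^{-2},
```

compared with the O(M^{-1}) rate achievable by separable wavelets and O(M^{-1/2}) by Fourier bases for the same image class.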


Journal ArticleDOI
TL;DR: The results from this review may provide the most plausible estimates of how plants in their native environments and field-grown crops will respond to rising atmospheric [CO2]; but even with FACE there are limitations, which are discussed.
Abstract: Contents: Summary; I. What is FACE?; II. Materials and methods; III. Photosynthetic carbon uptake; IV. Acclimation of photosynthesis; V. Growth, above-ground production and yield; VI. So, what have we learned?; Acknowledgements; References; Appendix 1. References included in the database for meta-analyses; Appendix 2. Results of the meta-analysis of FACE effects. Summary: Free-air CO2 enrichment (FACE) experiments allow study of the effects of elevated [CO2] on plants and ecosystems grown under natural conditions without enclosure. Data from 120 primary, peer-reviewed articles describing physiology and production in the 12 large-scale FACE experiments (475–600 ppm) were collected and summarized using meta-analytic techniques. The results confirm some results from previous chamber experiments: light-saturated carbon uptake, diurnal C assimilation, growth and above-ground production increased, while specific leaf area and stomatal conductance decreased in elevated [CO2]. There were, however, differences under FACE: trees were more responsive than herbaceous species to elevated [CO2], and grain crop yields increased far less than anticipated from prior enclosure studies. The broad direction of change in photosynthesis and production in elevated [CO2] may be similar in FACE and enclosure studies, but there are major quantitative differences: trees were more responsive than other functional types; C4 species showed little response; and the reduction in plant nitrogen was small and largely accounted for by decreased Rubisco. The results from this review may provide the most plausible estimates of how plants in their native environments and field-grown crops will respond to rising atmospheric [CO2]; but even with FACE there are limitations, which are also discussed.

3,140 citations
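
As a sketch of the meta-analytic step, here is a generic fixed-effect analysis of the log response ratio ln(elevated/ambient), the effect size commonly used in FACE syntheses. The study means and variances below are hypothetical, not data from the review.

```python
import numpy as np

def pooled_response(x_e, x_a, v_e, v_a, n_e, n_a):
    """Fixed-effect meta-analysis of the log response ratio ln(x_e / x_a).
    x_*: per-study means; v_*: per-observation variances; n_*: sample sizes."""
    lnr = np.log(x_e / x_a)
    var = v_e / (n_e * x_e**2) + v_a / (n_a * x_a**2)  # delta-method variance
    w = 1.0 / var                                      # inverse-variance weights
    mean = np.sum(w * lnr) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    return 100 * np.expm1(mean), se   # % change under elevated CO2, SE (log scale)

# Hypothetical per-study means of light-saturated carbon uptake:
x_e = np.array([32.1, 28.4, 25.0]); x_a = np.array([24.9, 23.0, 21.5])
v_e = np.array([4.0, 3.5, 2.8]);    v_a = np.array([3.6, 3.1, 2.5])
print(pooled_response(x_e, x_a, v_e, v_a, n_e=8, n_a=8))
```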


Journal ArticleDOI
TL;DR: In this article, the defining features of single-subject research are presented, the contributions of single-subject research to special education are reviewed, and a specific proposal is offered for using single-subject research to document evidence-based practice.
Abstract: Single-subject research plays an important role in the development of evidence-based practice in special education. The defining features of single-subject research are presented, the contributions of single-subject research to special education are reviewed, and a specific proposal is offered for using single-subject research to document evidence-based practice. This article allows readers to determine if a specific study is a credible example of single-subject research and if a specific practice or procedure has been validated as "evidence-based" via single-subject research.

Journal ArticleDOI
K. Adcox, S. S. Adler, Serguei Afanasiev, Christine Angela Aidala, and 550 more (48 institutions)
TL;DR: In this paper, data from the PHENIX detector at the Relativistic Heavy Ion Collider (RHIC) are examined, with an emphasis on implications for the formation of a new state of dense matter.

Journal ArticleDOI
TL;DR: This paper shows how to arrange physical lighting so that the acquired images of each object can be directly used as the basis vectors of a low-dimensional linear space and that this subspace is close to those acquired by the other methods.
Abstract: Previous work has demonstrated that the image variation of many objects (human faces in particular) under variable lighting can be effectively modeled by low-dimensional linear spaces, even when there are multiple light sources and shadowing. Basis images spanning this space are usually obtained in one of three ways: a large set of images of the object under different lighting conditions is acquired, and principal component analysis (PCA) is used to estimate a subspace. Alternatively, synthetic images are rendered from a 3D model (perhaps reconstructed from images) under point sources and, again, PCA is used to estimate a subspace. Finally, images rendered from a 3D model under diffuse lighting based on spherical harmonics are directly used as basis images. In this paper, we show how to arrange physical lighting so that the acquired images of each object can be directly used as the basis vectors of a low-dimensional linear space and that this subspace is close to those acquired by the other methods. More specifically, there exist configurations of k point light source directions, with k typically ranging from 5 to 9, such that, by taking k images of an object under these single sources, the resulting subspace is an effective representation for recognition under a wide range of lighting conditions. Since the subspace is generated directly from real images, potentially complex and/or brittle intermediate steps such as 3D reconstruction can be completely avoided; nor is it necessary to acquire large numbers of training images or to physically construct complex diffuse (harmonic) light fields. We validate the use of subspaces constructed in this fashion within the context of face recognition.
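
As an illustration of the subspace idea, here is a generic PCA/nearest-subspace sketch; this is the PCA route the abstract contrasts with directly acquired basis images, not the paper's method of physically configuring the k light sources.

```python
import numpy as np

def lighting_basis(images, dim=9):
    """Build a low-dimensional linear basis from images of one object under
    varying illumination. images: array of shape (k, height, width)."""
    X = images.reshape(len(images), -1).astype(float)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:dim]                          # rows span the lighting subspace

def subspace_residual(img, basis):
    """Distance from a probe image to the subspace; recognition assigns the
    probe to the identity whose subspace gives the smallest residual."""
    x = img.ravel().astype(float)
    return np.linalg.norm(x - basis.T @ (basis @ x))
```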

Journal ArticleDOI
TL;DR: In this article, the authors measure the effect of Catholic high school attendance on educational attainment and test scores, and find that Catholic high schools substantially increase the probability of graduating from high school and, more tentatively, attending college.
Abstract: In this paper we measure the effect of Catholic high school attendance on educational attainment and test scores. Because we do not have a good instrumental variable for Catholic school attendance, we develop new estimation methods based on the idea that the amount of selection on the observed explanatory variables in a model provides a guide to the amount of selection on the unobservables. We also propose an informal way to assess selectivity bias based on measuring the ratio of selection on unobservables to selection on observables that would be required if one is to attribute the entire effect of Catholic school attendance to selection bias. We use our methods to estimate the effect of attending a Catholic high school on a variety of outcomes. Our main conclusion is that Catholic high schools substantially increase the probability of graduating from high school and, more tentatively, attending college. We find little evidence of an effect on test scores.
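
A heavily simplified sketch of the informal diagnostic described in the abstract, on synthetic data (all names and coefficients are hypothetical; the paper's actual estimator is more involved): compare the treatment estimate with and without observed controls, since the movement induced by observables is a guide to how much selection on unobservables would be needed to explain the remaining effect.

```python
import numpy as np

def ols_slope(y, X):
    """OLS coefficients via least squares; column 1 is the treatment dummy."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

rng = np.random.default_rng(1)
n = 5000
Z = rng.standard_normal((n, 3))                        # observed background controls
c = (Z @ np.array([0.3, 0.2, 0.1]) + rng.standard_normal(n) > 0).astype(float)
y = 0.1 * c + Z @ np.array([0.4, 0.3, 0.2]) + rng.standard_normal(n)

ones = np.ones((n, 1))
b_raw = ols_slope(y, np.column_stack([ones, c]))       # no controls
b_ctl = ols_slope(y, np.column_stack([ones, c, Z]))    # with observed controls
print(b_raw, b_ctl)   # the shift b_raw - b_ctl gauges selection on observables
```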

Journal ArticleDOI
TL;DR: With the growing understanding of polymer gene-delivery mechanisms and continued efforts of creative polymer chemists, it is likely that polymer-based gene-Delivery systems will become an important tool for human gene therapy.
Abstract: The lack of safe and efficient gene-delivery methods is a limiting obstacle to human gene therapy. Synthetic gene-delivery agents, although safer than recombinant viruses, generally do not possess the required efficacy. In recent years, a variety of effective polymers have been designed specifically for gene delivery, and much has been learned about their structure–function relationships. With the growing understanding of polymer gene-delivery mechanisms and continued efforts of creative polymer chemists, it is likely that polymer-based gene-delivery systems will become an important tool for human gene therapy.

Journal ArticleDOI
12 Jun 2005
TL;DR: DART is a new tool for automatically testing software that combines three main techniques, automated extraction of the interface of a program with its external environment using static source-code parsing, and dynamic analysis of how the program behaves under random testing and automatic generation of new test inputs to direct systematically the execution along alternative program paths.
Abstract: We present a new tool, named DART, for automatically testing software that combines three main techniques: (1) automated extraction of the interface of a program with its external environment using static source-code parsing; (2) automatic generation of a test driver for this interface that performs random testing to simulate the most general environment the program can operate in; and (3) dynamic analysis of how the program behaves under random testing and automatic generation of new test inputs to direct systematically the execution along alternative program paths. Together, these three techniques constitute Directed Automated Random Testing, or DART for short. The main strength of DART is thus that testing can be performed completely automatically on any program that compiles -- there is no need to write any test driver or harness code. During testing, DART detects standard errors such as program crashes, assertion violations, and non-termination. Preliminary experiments to unit test several examples of C programs are very encouraging.
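
A toy concolic loop in the spirit of DART follows. This is a drastic simplification: a Python program with manually recorded path constraints and the z3 solver standing in for DART's C instrumentation and constraint solving; the program under test is hypothetical.

```python
from z3 import Int, Solver, Not, sat

x, y = Int('x'), Int('y')

def run(cx, cy):
    """Program under test; records the branch conditions of the path taken."""
    path = []
    if cy == 10:
        path.append(y == 10)
        if cx > 20:
            raise AssertionError("bug reached")    # the error DART should find
        path.append(Not(x > 20))
    else:
        path.append(Not(y == 10))
    return path

cx, cy = 0, 0                      # start from an arbitrary concrete input
for _ in range(10):
    try:
        path = run(cx, cy)
    except AssertionError:
        print("bug found at input", (cx, cy))
        break
    s = Solver()
    s.add(*path[:-1])              # keep the path prefix ...
    s.add(Not(path[-1]))           # ... and flip the last branch taken
    if s.check() != sat:
        break                      # no input reaches the alternative path
    m = s.model()
    cx = m.eval(x, model_completion=True).as_long()
    cy = m.eval(y, model_completion=True).as_long()
```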

Journal ArticleDOI
TL;DR: The subsystem approach is described, the first release of the growing library of populated subsystems is offered, and the SEED is the first annotation environment that supports this model of annotation.
Abstract: The release of the 1000th complete microbial genome will occur in the next two to three years. In anticipation of this milestone, the Fellowship for Interpretation of Genomes (FIG) launched the Project to Annotate 1000 Genomes. The project is built around the principle that the key to improved accuracy in high-throughput annotation technology is to have experts annotate single subsystems over the complete collection of genomes, rather than having an annotation expert attempt to annotate all of the genes in a single genome. Using the subsystems approach, all of the genes implementing the subsystem are analyzed by an expert in that subsystem. An annotation environment was created where populated subsystems are curated and projected to new genomes. A portable notion of a populated subsystem was defined, and tools developed for exchanging and curating these objects. Tools were also developed to resolve conflicts between populated subsystems. The SEED is the first annotation environment that supports this model of annotation. Here, we describe the subsystem approach, and offer the first release of our growing library of populated subsystems. The initial release of data includes 180,177 distinct proteins with 2,133 distinct functional roles. This data comes from 173 subsystems and 383 different organisms.

Proceedings ArticleDOI
01 Sep 2005
TL;DR: In this paper, the authors address the problem of automating unit testing with memory graphs as inputs, and develop a method to represent and track constraints that capture the behavior of a symbolic execution of a unit with memory graph as inputs.
Abstract: In unit testing, a program is decomposed into units, which are collections of functions. A part of a unit can be tested by generating inputs for a single entry function. The entry function may contain pointer arguments, in which case the inputs to the unit are memory graphs. The paper addresses the problem of automating unit testing with memory graphs as inputs. The approach used builds on previous work combining symbolic and concrete execution, and more specifically, using such a combination to generate test inputs to explore all feasible execution paths. The current work develops a method to represent and track constraints that capture the behavior of a symbolic execution of a unit with memory graphs as inputs. Moreover, an efficient constraint solver is proposed to facilitate incremental generation of such test inputs. Finally, CUTE, a tool implementing the method, is described together with the results of applying CUTE to real-world examples of C code.
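
To make "memory graphs as inputs" concrete, a small sketch follows. It is illustrative only: the heap shapes are enumerated by hand here, whereas CUTE derives them from pointer constraints collected along symbolic executions; the function under test is hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    value: int
    next: "Optional[Node]" = None

def sum_list(head: Optional[Node]) -> int:
    """Unit under test: sum a linked list, guarding against cyclic inputs."""
    total, seen = 0, set()
    while head is not None and id(head) not in seen:
        seen.add(id(head))
        total += head.value
        head = head.next
    return total

# Inputs to the unit are memory graphs: empty, singleton, two-node, cyclic.
empty = None
single = Node(1)
pair = Node(1, Node(2))
cyclic = Node(1); cyclic.next = cyclic

for shape in (empty, single, pair, cyclic):
    print(sum_list(shape))
```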

Proceedings Article
05 Dec 2005
TL;DR: This paper proposes a "filter" method for feature selection which is independent of any learning algorithm, based on the observation that, in many real world classification problems, data from the same class are often close to each other.
Abstract: In supervised learning scenarios, feature selection has been studied widely in the literature. Selecting features in unsupervised learning scenarios is a much harder problem, due to the absence of class labels that would guide the search for relevant information. Moreover, almost all previous unsupervised feature selection methods are "wrapper" techniques that require a learning algorithm to evaluate the candidate feature subsets. In this paper, we propose a "filter" method for feature selection which is independent of any learning algorithm. Our method can be performed in either a supervised or an unsupervised fashion. The proposed method is based on the observation that, in many real-world classification problems, data from the same class are often close to each other. The importance of a feature is evaluated by its power of locality preserving, or Laplacian Score. We compare our method with data variance (unsupervised) and Fisher score (supervised) on two data sets. Experimental results demonstrate the effectiveness and efficiency of our algorithm.
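
A compact sketch of the score itself, following the construction in the paper: a heat-kernel kNN graph, the graph Laplacian L = D - S, and a per-feature ratio. The bandwidth t and neighborhood size k below are hypothetical defaults.

```python
import numpy as np

def laplacian_score(X, k=5, t=1.0):
    """Laplacian Score per feature; lower means better locality preservation.
    X: (n_samples, n_features)."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise sq. distances
    S = np.zeros((n, n))
    nbrs = np.argsort(d2, axis=1)[:, 1:k + 1]            # k nearest neighbors
    for i in range(n):
        S[i, nbrs[i]] = np.exp(-d2[i, nbrs[i]] / t)      # heat-kernel weights
    S = np.maximum(S, S.T)                               # symmetrize the graph
    D = np.diag(S.sum(1))
    L = D - S                                            # graph Laplacian
    one = np.ones(n)
    scores = np.empty(X.shape[1])
    for r in range(X.shape[1]):
        f = X[:, r]
        f = f - (f @ D @ one) / (one @ D @ one)          # remove weighted mean
        scores[r] = (f @ L @ f) / (f @ D @ f)
    return scores
```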

Journal ArticleDOI
TL;DR: This review will concentrate on findings with P-450cam of the Pseudomonas putida camphor-5-exo-hydroxylase, and attention will be drawn to parallel and contrasting examples from other P- 450s as appropriate.
Abstract: Two decades have passed since the discovery in liver microsomes of a haemoprotein that forms a reduced-CO complex with the absorptive maximum of the Soret at 450 nm (Klingenberg, 1958; Garfinkel, 1958) and the identification of this protein as a new cytochrome: pigment cytochrome, P-450 (Omura and Sato, 1962, 1964a). In the intervening years, the study of cytochrome P-450-dependent monooxygenases has expanded exponentially. From the first crude attempts to solubilise a P-450 (Omura and Sato, 1963, 1964b) to the determination of the primary, secondary, and tertiary structure of cytochrome P-450cam by amino acid sequencing (Haniu et al., 1982a,b) and X-ray crystallography (Poulos et al., 1984), our understanding of this unique family of proteins has been advancing on all fronts. Since, perhaps, the greatest understanding of the structure and mechanism of P-450s has come from concentrated study of P-450cam of the Pseudomonas putida camphor-5-exo-hydroxylase, this review will concentrate on findings with P-450cam; attention will be drawn to parallel and contrasting examples from other P-450s as appropriate.

Journal ArticleDOI
TL;DR: This revision of the classification of unicellular eukaryotes updates that of Levine et al. (1980) for the protozoa and expands it to include other protists, and proposes a scheme that is based on nameless ranked systematics.
Abstract: This revision of the classification of unicellular eukaryotes updates that of Levine et al. (1980) for the protozoa and expands it to include other protists. Whereas the previous revision was primarily to incorporate the results of ultrastructural studies, this revision incorporates results from both ultrastructural research since 1980 and molecular phylogenetic studies. We propose a scheme that is based on nameless ranked systematics. The vocabulary of the taxonomy is updated, particularly to clarify the naming of groups that have been repositioned. We recognize six clusters of eukaryotes that may represent the basic groupings similar to traditional "kingdoms." The multicellular lineages emerged from within monophyletic protist lineages: animals and fungi from Opisthokonta, plants from Archaeplastida, and brown algae from Stramenopiles.

Journal Article
TL;DR: A decentralized density control algorithm, Optimal Geographical Density Control (OGDC), is devised for density control in large scale sensor networks and can maintain coverage as well as connectivity, regardless of the relationship between the radio range and the sensing range.
Abstract: In this paper, we address the issues of maintaining sensing coverage and connectivity by keeping a minimum number of sensor nodes in the active mode in wireless sensor networks. We investigate the relationship between coverage and connectivity by solving the following two sub-problems. First, we prove that if the radio range is at least twice the sensing range, complete coverage of a convex area implies connectivity among the working set of nodes. Second, we derive, under the ideal case in which node density is sufficiently high, a set of optimality conditions under which a subset of working sensor nodes can be chosen for complete coverage. Based on the optimality conditions, we then devise a decentralized density control algorithm, Optimal Geographical Density Control (OGDC), for density control in large scale sensor networks. The OGDC algorithm is fully localized and can maintain coverage as well as connectivity, regardless of the relationship between the radio range and the sensing range. Ns-2 simulations show that OGDC outperforms existing density control algorithms [25, 26, 29] with respect to the number of working nodes needed and network lifetime (with up to 50% improvement), and achieves almost the same coverage as the algorithm with the best result.
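
The paper's first result (with radio range at least twice the sensing range, coverage of a convex area implies connectivity) can be probed empirically. Below is a small simulation sketch over random deployments in the unit square; the parameters are hypothetical, and this checks instances rather than proving the theorem.

```python
import numpy as np
from itertools import combinations

def coverage_and_connectivity(n=60, sensing_r=0.15, seed=0):
    """For a random deployment, report (area covered?, network connected?)
    when the radio range is exactly twice the sensing range."""
    rng = np.random.default_rng(seed)
    pts = rng.random((n, 2))
    radio_r = 2 * sensing_r

    # Communication graph and BFS connectivity test.
    adj = [[] for _ in range(n)]
    for i, j in combinations(range(n), 2):
        if np.linalg.norm(pts[i] - pts[j]) <= radio_r:
            adj[i].append(j); adj[j].append(i)
    seen, stack = {0}, [0]
    while stack:
        for v in adj[stack.pop()]:
            if v not in seen:
                seen.add(v); stack.append(v)
    connected = len(seen) == n

    # Grid test of coverage of the unit square.
    g = np.linspace(0, 1, 50)
    grid = np.stack(np.meshgrid(g, g), -1).reshape(-1, 2)
    gaps = np.linalg.norm(grid[:, None, :] - pts[None, :, :], axis=-1).min(1)
    return bool((gaps <= sensing_r).all()), connected

print(coverage_and_connectivity())   # covered=True should entail connected=True
```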

Proceedings ArticleDOI
17 Oct 2005
TL;DR: This paper proposes a novel subspace learning algorithm called neighborhood preserving embedding (NPE), which aims at preserving the local neighborhood structure on the data manifold and is less sensitive to outliers than principal component analysis (PCA).
Abstract: Recently there has been a lot of interest in geometrically motivated approaches to data analysis in high dimensional spaces. We consider the case where data is drawn from sampling a probability distribution that has support on or near a submanifold of Euclidean space. In this paper, we propose a novel subspace learning algorithm called neighborhood preserving embedding (NPE). Different from principal component analysis (PCA), which aims at preserving the global Euclidean structure, NPE aims at preserving the local neighborhood structure on the data manifold. Therefore, NPE is less sensitive to outliers than PCA. Also, compared to recently proposed manifold learning algorithms such as Isomap and locally linear embedding, NPE is defined everywhere, rather than only on the training data points. Furthermore, NPE may be conducted in the original space or in the reproducing kernel Hilbert space into which data points are mapped. This gives rise to kernel NPE. Several experiments on face databases demonstrate the effectiveness of our algorithm.
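
A linear-algebra sketch of NPE, following the standard formulation: LLE-style local reconstruction weights, then a generalized eigenproblem; the regularization constants below are hypothetical.

```python
import numpy as np
from scipy.linalg import eigh

def npe(X, k=5, dim=2, reg=1e-3):
    """Neighborhood Preserving Embedding: returns a linear projection
    (n_features, dim) that preserves local reconstruction weights.
    X: (n_samples, n_features)."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.zeros((n, n))
    for i in range(n):
        nb = np.argsort(d2[i])[1:k + 1]
        Z = X[nb] - X[i]                          # neighborhood centered at x_i
        G = Z @ Z.T
        G += reg * np.trace(G) * np.eye(k)        # regularized local Gram matrix
        w = np.linalg.solve(G, np.ones(k))
        W[i, nb] = w / w.sum()                    # reconstruction weights
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    A = X.T @ M @ X
    B = X.T @ X + reg * np.eye(X.shape[1])        # regularize for invertibility
    _, vecs = eigh(A, B)                          # generalized eigenproblem
    return vecs[:, :dim]                          # smallest-eigenvalue directions
```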

Journal ArticleDOI
08 Apr 2005-Science
TL;DR: A novel nonverbal task is used to examine 15-month-old infants' ability to predict an actor's behavior on the basis of her true or false belief about a toy's hiding place, supporting the view that, from a young age, children appeal to mental states—goals, perceptions, and beliefs—to explain the behavior of others.
Abstract: For more than two decades, researchers have argued that young children do not understand mental states such as beliefs. Part of the evidence for this claim comes from preschoolers' failure at verbal tasks that require the understanding that others may hold false beliefs. Here, we used a novel nonverbal task to examine 15-month-old infants' ability to predict an actor's behavior on the basis of her true or false belief about a toy's hiding place. Results were positive, supporting the view that, from a young age, children appeal to mental states--goals, perceptions, and beliefs--to explain the behavior of others.

Journal ArticleDOI
TL;DR: In this paper, a mechanistic model is proposed that accounts for reduced structural reorganization and densification in the microstructure of geopolymer gels with high concentrations of soluble silicon in the activating solution.

Journal ArticleDOI
TL;DR: The development of the method of particle image velocimetry (PIV) is traced by describing some of the milestones that have enabled new and/or better measurements to be made.
Abstract: The development of the method of particle image velocimetry (PIV) is traced by describing some of the milestones that have enabled new and/or better measurements to be made. The current status of PIV is summarized, and some goals for future advances are addressed.
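
The core PIV computation the review describes is cross-correlation of interrogation windows between two exposures; the correlation peak gives the local particle displacement. A minimal FFT-based sketch on synthetic data:

```python
import numpy as np

def piv_displacement(win_a, win_b):
    """Core PIV step: FFT-based cross-correlation of two interrogation
    windows; the correlation peak gives the mean particle displacement."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.fft.irfft2(np.conj(np.fft.rfft2(a)) * np.fft.rfft2(b))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap indices so displacements can be negative.
    dy = peak[0] if peak[0] <= a.shape[0] // 2 else peak[0] - a.shape[0]
    dx = peak[1] if peak[1] <= a.shape[1] // 2 else peak[1] - a.shape[1]
    return dx, dy

# Synthetic check: shift a random particle image by (dx, dy) = (3, -2) pixels.
rng = np.random.default_rng(2)
img = rng.random((64, 64))
shifted = np.roll(img, shift=(-2, 3), axis=(0, 1))
print(piv_displacement(img, shifted))   # expect (3, -2)
```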

Journal ArticleDOI
TL;DR: This paper facilitates the reliable use of nonlinear convex relaxations in global optimization via a polyhedral branch-and-cut approach and proves that, if the convexity of a univariate or multivariate function is apparent by decomposing it into convex subexpressions, the relaxation constructor automatically exploits this convexITY in a manner that is much superior to developing polyhedral outer approximators for the original function.
Abstract: A variety of nonlinear, including semidefinite, relaxations have been developed in recent years for nonconvex optimization problems. Their potential can be realized only if they can be solved with sufficient speed and reliability. Unfortunately, state-of-the-art nonlinear programming codes are significantly slower and numerically unstable compared to linear programming software. In this paper, we facilitate the reliable use of nonlinear convex relaxations in global optimization via a polyhedral branch-and-cut approach. Our algorithm exploits convexity, either identified automatically or supplied through a suitable modeling language construct, in order to generate polyhedral cutting planes and relaxations for multivariate nonconvex problems. We prove that, if the convexity of a univariate or multivariate function is apparent by decomposing it into convex subexpressions, our relaxation constructor automatically exploits this convexity in a manner that is much superior to developing polyhedral outer approximators for the original function. The convexity of functional expressions that are composed to form nonconvex expressions is also automatically exploited. Root-node relaxations are computed for 87 problems from globallib and minlplib, and detailed computational results are presented for globally solving 26 of these problems with BARON 7.2, which implements the proposed techniques. The use of cutting planes for these problems reduces root-node relaxation gaps by up to 100% and expedites the solution process, often by several orders of magnitude.
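
The polyhedral machinery rests on a simple fact: every tangent to a convex function underestimates it, so a handful of gradient cuts yields a linear outer approximation an LP solver can handle. A generic illustration of tangent cuts, not BARON's internals; exp(x) and the cut points are hypothetical.

```python
import numpy as np

def tangent_cuts(f, fprime, points):
    """Polyhedral outer approximation of a convex function: each tangent line
    f(x0) + f'(x0)(x - x0) underestimates f. Returns (slope, intercept) pairs."""
    return [(fprime(x0), f(x0) - fprime(x0) * x0) for x0 in points]

def relaxation_lower_bound(cuts, x):
    """Pointwise maximum of the cuts: the polyhedral underestimator at x."""
    return max(m * x + c for m, c in cuts)

# Example: outer-approximate the convex term exp(x) on [0, 2] with 5 cuts.
f, fprime = np.exp, np.exp
cuts = tangent_cuts(f, fprime, np.linspace(0, 2, 5))
for x in (0.3, 1.0, 1.7):
    print(x, relaxation_lower_bound(cuts, x), f(x))   # bound <= true value
```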

Journal ArticleDOI
TL;DR: This article found that households exhibit a strong preference for local investments and that the average household generates an additional annualized return of 3.2% from its local holdings relative to its non-local holdings, suggesting that local investors can exploit local knowledge.
Abstract: Using data on the investments a large number of individual investors made through a discount broker from 1991 to 1996, we find that households exhibit a strong preference for local investments. We test whether this locality bias stems from information or from simple familiarity. The average household generates an additional annualized return of 3.2% from its local holdings relative to its nonlocal holdings, suggesting that local investors can exploit local knowledge. Excess returns to investing locally are even larger among stocks not in the S&P 500 index.

Journal ArticleDOI
TL;DR: The proposed texture representation is evaluated in retrieval and classification tasks using the entire Brodatz database and a publicly available collection of 1,000 photographs of textured surfaces taken from different viewpoints.
Abstract: This paper introduces a texture representation suitable for recognizing images of textured surfaces under a wide range of transformations, including viewpoint changes and nonrigid deformations. At the feature extraction stage, a sparse set of affine Harris and Laplacian regions is found in the image. Each of these regions can be thought of as a texture element having a characteristic elliptic shape and a distinctive appearance pattern. This pattern is captured in an affine-invariant fashion via a process of shape normalization followed by the computation of two novel descriptors, the spin image and the RIFT descriptor. When affine invariance is not required, the original elliptical shape serves as an additional discriminative feature for texture recognition. The proposed approach is evaluated in retrieval and classification tasks using the entire Brodatz database and a publicly available collection of 1,000 photographs of textured surfaces taken from different viewpoints.
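
Of the two descriptors, the spin image is easy to sketch: after shape normalization it is a two-dimensional histogram of pixel brightness versus distance from the region center, which makes it invariant to in-plane rotation. The bin counts below are hypothetical defaults.

```python
import numpy as np

def spin_image(patch, d_bins=10, i_bins=10):
    """Intensity-domain spin image: a 2D histogram of pixel brightness versus
    distance from the center of a normalized patch."""
    h, w = patch.shape
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    d = np.hypot(yy - cy, xx - cx).ravel()
    d = d / d.max()                                   # distances to [0, 1]
    i = patch.ravel().astype(float)
    i = (i - i.min()) / (np.ptp(i) + 1e-12)           # intensities to [0, 1]
    hist, _, _ = np.histogram2d(d, i, bins=(d_bins, i_bins),
                                range=((0, 1), (0, 1)))
    return hist / hist.sum()                          # descriptor sums to 1

# Hypothetical usage on a random normalized texture patch.
rng = np.random.default_rng(3)
print(spin_image(rng.random((32, 32))).shape)         # (10, 10)
```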

Journal ArticleDOI
TL;DR: This paper gives an overview of common methods of formulating R0 and surrogate threshold parameters from deterministic, non-structured models, and surveys the recent use of R0 in assessing emerging diseases such as severe acute respiratory syndrome and avian influenza, a number of recent livestock diseases, and the vector-borne diseases malaria, dengue and West Nile virus.
Abstract: The basic reproductive ratio, R0, is defined as the expected number of secondary infections arising from a single individual during his or her entire infectious period, in a population of susceptibles. This concept is fundamental to the study of epidemiology and within-host pathogen dynamics. Most importantly, R0 often serves as a threshold parameter that predicts whether an infection will spread. Related parameters which share this threshold behaviour, however, may or may not give the true value of R0. In this paper we give a brief overview of common methods of formulating R0 and surrogate threshold parameters from deterministic, non-structured models. We also review common means of estimating R0 from epidemiological data. Finally, we survey the recent use of R0 in assessing emerging diseases, such as severe acute respiratory syndrome and avian influenza, a number of recent livestock diseases, and the vector-borne diseases malaria, dengue and West Nile virus.
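
For the simplest deterministic, non-structured case covered by such reviews, the SIR model, R0 has a closed form. With transmission rate β, recovery rate γ, and the population normalized to 1:

```latex
\frac{dS}{dt} = -\beta S I, \qquad
\frac{dI}{dt} = \beta S I - \gamma I, \qquad
\frac{dR}{dt} = \gamma I .
```

Near the disease-free state (S ≈ 1), dI/dt ≈ (β − γ)I, so the infection spreads precisely when R0 = β/γ > 1, illustrating the threshold behaviour described above.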