Showing papers by "Massachusetts Institute of Technology" published in 2000


Journal ArticleDOI
07 Jan 2000-Cell
TL;DR: This work has been supported by the Department of the Army and the National Institutes of Health, and the author acknowledges the support and encouragement of the National Cancer Institute.

28,811 citations


Book ChapterDOI
TL;DR: This chapter assumes acquaintance with the principles and practice of PCR, as outlined in, for example, refs. 1–4.
Abstract: Designing PCR and sequencing primers is an essential activity for molecular biologists around the world. This chapter assumes acquaintance with the principles and practice of PCR, as outlined in, for example, refs. 1–4. Primer3 is a computer program that suggests PCR primers for a variety of applications, for example to create STSs (sequence tagged sites) for radiation hybrid mapping (5), or to amplify sequences for single nucleotide polymorphism discovery (6). Primer3 can also select single primers for sequencing reactions and can design oligonucleotide hybridization probes. In selecting oligos for primers or hybridization probes, Primer3 can consider many factors. These include oligo melting temperature, length, GC content, 3′ stability, estimated secondary structure, the likelihood of annealing to or amplifying undesirable sequences (for example interspersed repeats), the likelihood of primer–dimer formation between two copies of the same primer, and the accuracy of the source sequence. In the design of primer pairs, Primer3 can consider product size and melting temperature, the likelihood of primer–dimer formation between the two primers in the pair, the difference between primer melting temperatures, and primer location relative to particular regions of interest or to be avoided.
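As a toy illustration of two of the oligo properties the abstract mentions, the Python sketch below computes GC content and a rough Wallace-rule melting-temperature estimate for a hypothetical primer. This is not Primer3's actual scoring, which weighs many more factors (secondary structure, primer–dimer formation, mispriming, and so on).

```python
# Illustrative sketch only (not Primer3's algorithm): two basic oligo properties.

def gc_content(primer: str) -> float:
    """Fraction of G/C bases in the primer."""
    primer = primer.upper()
    return (primer.count("G") + primer.count("C")) / len(primer)

def wallace_tm(primer: str) -> float:
    """Rule-of-thumb melting temperature (deg C): Tm = 2*(A+T) + 4*(G+C)."""
    primer = primer.upper()
    at = primer.count("A") + primer.count("T")
    gc = primer.count("G") + primer.count("C")
    return 2 * at + 4 * gc

candidate = "ATGCGTACGTTAGCCTAG"   # hypothetical primer sequence
print(f"GC content: {gc_content(candidate):.2f}")
print(f"Approx. Tm: {wallace_tm(candidate):.1f} C")
```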

16,407 citations


Proceedings ArticleDOI
04 Jan 2000
TL;DR: The Low-Energy Adaptive Clustering Hierarchy (LEACH) as mentioned in this paper is a clustering-based protocol that utilizes randomized rotation of local cluster base stations (cluster-heads) to evenly distribute the energy load among the sensors in the network.
Abstract: Wireless distributed microsensor systems will enable the reliable monitoring of a variety of environments for both civil and military applications. In this paper, we look at communication protocols, which can have significant impact on the overall energy dissipation of these networks. Based on our findings that the conventional protocols of direct transmission, minimum-transmission-energy, multi-hop routing, and static clustering may not be optimal for sensor networks, we propose LEACH (Low-Energy Adaptive Clustering Hierarchy), a clustering-based protocol that utilizes randomized rotation of local cluster base stations (cluster-heads) to evenly distribute the energy load among the sensors in the network. LEACH uses localized coordination to enable scalability and robustness for dynamic networks, and incorporates data fusion into the routing protocol to reduce the amount of information that must be transmitted to the base station. Simulations show that LEACH can achieve as much as a factor of 8 reduction in energy dissipation compared with conventional routing protocols. In addition, LEACH is able to distribute energy dissipation evenly throughout the sensors, doubling the useful system lifetime for the networks we simulated.
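The randomized cluster-head rotation described above can be sketched as a per-round self-election against a probability threshold. The snippet below is a simplified Python illustration under assumed parameters (desired head fraction p, round index, set of recent heads), not the authors' simulation code; the threshold expression approximates the rotation rule reported for LEACH.

```python
import random

def leach_elect_cluster_heads(nodes, p, round_num, recent_heads):
    """Randomized cluster-head self-election in the spirit of LEACH (a sketch).

    nodes        : iterable of node ids
    p            : desired fraction of cluster heads per round
    round_num    : current round index r
    recent_heads : nodes that served as head in the last 1/p rounds (ineligible)
    """
    threshold = p / (1 - p * (round_num % round(1 / p)))
    heads = []
    for n in nodes:
        if n in recent_heads:          # wait until the rotation cycle completes
            continue
        if random.random() < threshold:
            heads.append(n)
    return heads

# Example: 100 sensor nodes, 5% cluster heads per round.
heads = leach_elect_cluster_heads(range(100), p=0.05, round_num=3, recent_heads=set())
print(len(heads))
```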

12,497 citations


Proceedings Article
01 Jan 2000
TL;DR: Two different multiplicative algorithms for non-negative matrix factorization are analyzed and one algorithm can be shown to minimize the conventional least squares error while the other minimizes the generalized Kullback-Leibler divergence.
Abstract: Non-negative matrix factorization (NMF) has previously been shown to be a useful decomposition for multivariate data. Two different multiplicative algorithms for NMF are analyzed. They differ only slightly in the multiplicative factor used in the update rules. One algorithm can be shown to minimize the conventional least squares error while the other minimizes the generalized Kullback-Leibler divergence. The monotonic convergence of both algorithms can be proven using an auxiliary function analogous to that used for proving convergence of the Expectation-Maximization algorithm. The algorithms can also be interpreted as diagonally rescaled gradient descent, where the rescaling factor is optimally chosen to ensure convergence.
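A compact NumPy sketch of the least-squares multiplicative updates analyzed in the paper: H and W are rescaled elementwise by ratios of matrix products. The small epsilon and fixed iteration count are implementation conveniences, not part of the original analysis.

```python
import numpy as np

def nmf_multiplicative(V, rank, n_iter=200, eps=1e-9, seed=0):
    """Multiplicative-update NMF minimizing the least-squares error ||V - WH||^2
    (the first of the two algorithms analyzed). A sketch; real code would add
    convergence checks."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank))
    H = rng.random((rank, m))
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update H
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update W
    return W, H

# Example: factor a random non-negative 50x30 matrix with rank 5.
V = np.abs(np.random.default_rng(1).random((50, 30)))
W, H = nmf_multiplicative(V, rank=5)
print(np.linalg.norm(V - W @ H))
```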

7,345 citations


Journal ArticleDOI
24 Feb 2000-Nature
TL;DR: It is shown that let-7 is a heterochronic switch gene that encodes a temporally regulated 21-nucleotide RNA that is complementary to elements in the 3′ untranslated regions of the heterochronic genes lin-14, lin-28, lin-41, lin-42 and daf-12, indicating that expression of these genes may be directly controlled by let-7.
Abstract: The C. elegans heterochronic gene pathway consists of a cascade of regulatory genes that are temporally controlled to specify the timing of developmental events1. Mutations in heterochronic genes cause temporal transformations in cell fates in which stage-specific events are omitted or reiterated2. Here we show that let-7 is a heterochronic switch gene. Loss of let-7 gene activity causes reiteration of larval cell fates during the adult stage, whereas increased let-7 gene dosage causes precocious expression of adult fates during larval stages. let-7 encodes a temporally regulated 21-nucleotide RNA that is complementary to elements in the 3′ untranslated regions of the heterochronic genes lin-14, lin-28, lin-41, lin-42 and daf-12, indicating that expression of these genes may be directly controlled by let-7. A reporter gene bearing the lin-41 3′ untranslated region is temporally regulated in a let-7-dependent manner. A second regulatory RNA, lin-4, negatively regulates lin-14 and lin-28 through RNA–RNA interactions with their 3′ untranslated regions3,4. We propose that the sequential stage-specific expression of the lin-4 and let-7 regulatory RNAs triggers transitions in the complement of heterochronic regulatory proteins to coordinate developmental timing.

4,821 citations


Journal ArticleDOI
TL;DR: The major elements of MIT Lincoln Laboratory's Gaussian mixture model (GMM)-based speaker verification system used successfully in several NIST Speaker Recognition Evaluations (SREs) are described.

4,673 citations


Proceedings ArticleDOI
01 Aug 2000
TL;DR: The randomized algorithm used by beacons to transmit information, the use of concurrent radio and ultrasonic signals to infer distance, the listener inference algorithms to overcome multipath and interference, and practical beacon configuration and positioning techniques that improve accuracy are described.
Abstract: This paper presents the design, implementation, and evaluation of Cricket, a location-support system for in-building, mobile, location-dependent applications. It allows applications running on mobile and static nodes to learn their physical location by using listeners that hear and analyze information from beacons spread throughout the building. Cricket is the result of several design goals, including user privacy, decentralized administration, network heterogeneity, and low cost. Rather than explicitly tracking user location, Cricket helps devices learn where they are and lets them decide whom to advertise this information to; it does not rely on any centralized management or control and there is no explicit coordination between beacons; it provides information to devices regardless of their type of network connectivity; and each Cricket device is made from off-the-shelf components and costs less than U.S. $10. We describe the randomized algorithm used by beacons to transmit information, the use of concurrent radio and ultrasonic signals to infer distance, the listener inference algorithms to overcome multipath and interference, and practical beacon configuration and positioning techniques that improve accuracy. Our experience with Cricket shows that several location-dependent applications such as in-building active maps and device control can be developed with little effort or manual configuration.
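The distance inference described above can be illustrated in a few lines of Python: the listener measures the lag between the effectively instantaneous RF message and the slower ultrasonic pulse and multiplies by the speed of sound. The constant and timing values below are assumptions for illustration, not Cricket's calibrated parameters.

```python
SPEED_OF_SOUND = 343.0   # m/s at room temperature (assumed)

def distance_from_beacon(t_rf: float, t_ultrasound: float) -> float:
    """Estimate listener-beacon distance from the arrival times (in seconds) of
    the concurrent RF and ultrasonic signals; RF propagation is treated as
    instantaneous relative to sound, as in the design described above."""
    lag = t_ultrasound - t_rf
    return SPEED_OF_SOUND * lag

# Example: an ultrasonic pulse arriving 8.7 ms after the RF message
# corresponds to a beacon roughly 3 m away.
print(distance_from_beacon(0.0, 0.0087))   # ~2.98 m
```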

4,123 citations


Journal ArticleDOI
TL;DR: In this paper, the authors propose an extension to the structurational perspective on technology that develops a practice lens to examine how people, as they interact with a technology in their ongoing practices, enact structures which shape their emergent and situated use of that technology.
Abstract: As both technologies and organizations undergo dramatic changes in form and function, organizational researchers are increasingly turning to concepts of innovation, emergence, and improvisation to help explain the new ways of organizing and using technology evident in practice. With a similar intent, I propose an extension to the structurational perspective on technology that develops a practice lens to examine how people, as they interact with a technology in their ongoing practices, enact structures which shape their emergent and situated use of that technology. Viewing the use of technology as a process of enactment enables a deeper understanding of the constitutive role of social practices in the ongoing use and change of technologies in the workplace. After developing this lens, I offer an example of its use in research, and then suggest some implications for the study of technology in organizations.

4,036 citations


Journal ArticleDOI
TL;DR: This paper focuses on motion tracking and shows how one can use observed motion to learn patterns of activity in a site and create a hierarchical binary-tree classification of the representations within a sequence.
Abstract: Our goal is to develop a visual monitoring system that passively observes moving objects in a site and learns patterns of activity from those observations. For extended sites, the system will require multiple cameras. Thus, key elements of the system are motion tracking, camera coordination, activity classification, and event detection. In this paper, we focus on motion tracking and show how one can use observed motion to learn patterns of activity in a site. Motion segmentation is based on an adaptive background subtraction method that models each pixel as a mixture of Gaussians and uses an online approximation to update the model. The Gaussian distributions are then evaluated to determine which are most likely to result from a background process. This yields a stable, real-time outdoor tracker that reliably deals with lighting changes, repetitive motions from clutter, and long-term scene changes. While a tracking system is unaware of the identity of any object it tracks, the identity remains the same for the entire tracking sequence. Our system leverages this information by accumulating joint co-occurrences of the representations within a sequence. These joint co-occurrence statistics are then used to create a hierarchical binary-tree classification of the representations. This method is useful for classifying sequences, as well as individual instances of activities in a site.
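A simplified per-pixel sketch of the adaptive mixture-of-Gaussians background model described above, for a single grayscale pixel. The constants (three modes, learning rate, 2.5-standard-deviation match test, two background modes) are illustrative assumptions; the published system also handles color and full images.

```python
import numpy as np

class PixelMoG:
    """Per-pixel mixture-of-Gaussians background model (grayscale sketch)."""

    def __init__(self, K=3, alpha=0.01, init_var=225.0):
        self.alpha = alpha
        self.w = np.full(K, 1.0 / K)           # mode weights
        self.mu = np.linspace(0.0, 255.0, K)   # mode means (intensity)
        self.var = np.full(K, init_var)        # mode variances

    def update(self, x):
        """Fold pixel value x into the model; return True if x looks like background."""
        d2 = (x - self.mu) ** 2
        match = d2 < (2.5 ** 2) * self.var      # within 2.5 standard deviations
        self.w = (1 - self.alpha) * self.w + self.alpha * match
        if match.any():
            k = int(np.argmin(np.where(match, d2, np.inf)))   # closest matching mode
            self.mu[k] += self.alpha * (x - self.mu[k])
            self.var[k] += self.alpha * ((x - self.mu[k]) ** 2 - self.var[k])
        else:
            k = int(np.argmin(self.w))          # replace the weakest mode with x
            self.mu[k], self.var[k] = x, 225.0
        self.w /= self.w.sum()
        # Modes ranked by weight / spread; the top-ranked ones model the background.
        background = np.argsort(-(self.w / np.sqrt(self.var)))[:2]
        return bool(match[background].any())

# Feed a stream of intensities for one pixel; the sudden outlier is foreground.
model = PixelMoG()
labels = [model.update(v) for v in [120, 121, 119, 122, 40, 120]]
print(labels)
```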

3,631 citations


Journal ArticleDOI
TL;DR: When considering new sensory technologies one should look to nature for guidance, as living organisms have developed the ultimate chemical sensors.
Abstract: When considering new sensory technologies one should look to nature for guidance. Indeed, living organisms have developed the ultimate chemical sensors. Many insects can detect chemical signals with perfect specificity and incredible sensitivity. Mammalian olfaction is based on an array of less discriminating sensors and a memorized response pattern to identify a unique odor. It is important to recognize that the extraordinary sensory performance of biological systems does not originate from a single element. In actuality, their performance is derived from a completely interactive system wherein the receptor is served by analyte delivery and removal mechanisms, selectivity is derived from receptors, and sensitivity is the result of analyte-triggered biochemical cascades. Clearly, optimal artificial sensory sys-

3,464 citations


Journal ArticleDOI
17 Feb 2000-Nature
TL;DR: The analysis of two SIR2 mutations supports the idea that this deacetylase activity accounts for silencing, recombination suppression and extension of life span in vivo, and provides a molecular framework of NAD-dependent histone deacetylation that connects metabolism, genomic silencing and ageing in yeast and, perhaps, in higher eukaryotes.
Abstract: Yeast Sir2 is a heterochromatin component that silences transcription at silent mating loci, telomeres and the ribosomal DNA, and that also suppresses recombination in the rDNA and extends replicative life span. Mutational studies indicate that lysine 16 in the amino-terminal tail of histone H4 and lysines 9, 14 and 18 in H3 are critically important in silencing, whereas lysines 5, 8 and 12 of H4 have more redundant functions. Lysines 9 and 14 of histone H3 and lysines 5, 8 and 16 of H4 are acetylated in active chromatin and hypoacetylated in silenced chromatin, and overexpression of Sir2 promotes global deacetylation of histones, indicating that Sir2 may be a histone deacetylase. Deacetylation of lysine 16 of H4 is necessary for binding the silencing protein, Sir3. Here we show that yeast and mouse Sir2 proteins are nicotinamide adenine dinucleotide (NAD)-dependent histone deacetylases, which deacetylate lysines 9 and 14 of H3 and specifically lysine 16 of H4. Our analysis of two SIR2 mutations supports the idea that this deacetylase activity accounts for silencing, recombination suppression and extension of life span in vivo. These findings provide a molecular framework of NAD-dependent histone deacetylation that connects metabolism, genomic silencing and ageing in yeast and, perhaps, in higher eukaryotes.

Journal ArticleDOI
01 May 2000
TL;DR: A model for types and levels of automation is outlined that can be applied to four broad classes of functions: 1) information acquisition; 2) information analysis; 3) decision and action selection; and 4) action implementation.
Abstract: We outline a model for types and levels of automation that provides a framework and an objective basis for deciding which system functions should be automated and to what extent. Appropriate selection is important because automation does not merely supplant but changes human activity and can impose new coordination demands on the human operator. We propose that automation can be applied to four broad classes of functions: 1) information acquisition; 2) information analysis; 3) decision and action selection; and 4) action implementation. Within each of these types, automation can be applied across a continuum of levels from low to high, i.e., from fully manual to fully automatic. A particular system can involve automation of all four types at different levels. The human performance consequences of particular types and levels of automation constitute primary evaluative criteria for automation design using our model. Secondary evaluative criteria include automation reliability and the costs of decision/action consequences, among others. Examples of recommended types and levels of automation are provided to illustrate the application of the model to automation design.
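The four function classes and the low-to-high level continuum can be encoded as a small data structure, which makes the framework concrete when describing a particular system. The sketch below is purely illustrative: the class names come from the abstract, while the 1-10 numeric scale is an assumption.

```python
from dataclasses import dataclass
from enum import Enum

class FunctionClass(Enum):
    """The four broad classes of functions to which automation can be applied."""
    INFORMATION_ACQUISITION = 1
    INFORMATION_ANALYSIS = 2
    DECISION_AND_ACTION_SELECTION = 3
    ACTION_IMPLEMENTATION = 4

@dataclass
class AutomationProfile:
    """Level of automation per function class (1 = fully manual .. 10 = fully
    automatic; the numeric scale is an assumed convention, not the paper's)."""
    levels: dict  # FunctionClass -> int

    def validate(self):
        for fc, level in self.levels.items():
            assert isinstance(fc, FunctionClass) and 1 <= level <= 10

# Example: a hypothetical system with highly automated sensing but manual decisions.
profile = AutomationProfile({
    FunctionClass.INFORMATION_ACQUISITION: 8,
    FunctionClass.INFORMATION_ANALYSIS: 6,
    FunctionClass.DECISION_AND_ACTION_SELECTION: 2,
    FunctionClass.ACTION_IMPLEMENTATION: 3,
})
profile.validate()
```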

Journal ArticleDOI
31 Mar 2000-Cell
TL;DR: It is found that RNAi is ATP dependent yet uncoupled from mRNA translation, suggesting that the 21-23 nucleotide fragments from the dsRNA are guiding mRNA cleavage.

Journal ArticleDOI
TL;DR: In this paper, the authors focus on the role of symbol processors in business performance and economic growth, arguing that most problems are not numerical problems and that the everyday activities of most managers, professionals, and information workers involve other types of computation.
Abstract: How do computers contribute to business performance and economic growth? Even today, most people who are asked to identify the strengths of computers tend to think of computational tasks like rapidly multiplying large numbers. Computers have excelled at computation since the Mark I (1939), the first modern computer, and the ENIAC (1943), the first electronic computer without moving parts. During World War II, the U.S. government generously funded research into tools for calculating the trajectories of artillery shells. The result was the development of some of the first digital computers with remarkable capabilities for calculation—the dawn of the computer age. However, computers are not fundamentally number crunchers. They are symbol processors. The same basic technologies can be used to store, retrieve, organize, transmit, and algorithmically transform any type of information that can be digitized—numbers, text, video, music, speech, programs, and engineering drawings, to name a few. This is fortunate because most problems are not numerical problems. Ballistics, code breaking, parts of accounting, and bits and pieces of other tasks involve lots of calculation. But the everyday activities of most managers, professionals, and information workers involve other types of

Journal ArticleDOI
TL;DR: On conventional PC hardware, the Click IP router achieves a maximum loss-free forwarding rate of 333,000 64-byte packets per second, demonstrating that Click's modular and flexible architecture is compatible with good performance.
Abstract: Click is a new software architecture for building flexible and configurable routers. A Click router is assembled from packet processing modules called elements. Individual elements implement simple router functions like packet classification, queuing, scheduling, and interfacing with network devices. A router configuration is a directed graph with elements at the vertices; packets flow along the edges of the graph. Several features make individual elements more powerful and complex configurations easier to write, including pull connections, which model packet flow driven by transmitting hardware devices, and flow-based router context, which helps an element locate other interesting elements. Click configurations are modular and easy to extend. A standards-compliant Click IP router has 16 elements on its forwarding path; some of its elements are also useful in Ethernet switches and IP tunnelling configurations. Extending the IP router to support dropping policies, fairness among flows, or Differentiated Services simply requires adding a couple of elements in the right place. On conventional PC hardware, the Click IP router achieves a maximum loss-free forwarding rate of 333,000 64-byte packets per second, demonstrating that Click's modular and flexible architecture is compatible with good performance.
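The elements-at-vertices, packets-along-edges picture in the abstract can be sketched in a few lines of Python. This is only a toy model of the architecture (push connections between named elements), not Click's actual C++ API or configuration language.

```python
class Element:
    """Minimal stand-in for a router element in a configuration graph (a sketch)."""
    def __init__(self, name):
        self.name, self.outputs = name, []

    def connect(self, other):
        self.outputs.append(other)     # an edge in the configuration graph
        return other

    def push(self, packet):
        self.process(packet)
        for nxt in self.outputs:       # push the packet along outgoing edges
            nxt.push(packet)

    def process(self, packet):
        pass                           # e.g. classify, count, rewrite headers

class Counter(Element):
    def __init__(self, name):
        super().__init__(name)
        self.count = 0
    def process(self, packet):
        self.count += 1

# A toy "configuration": source -> counter -> sink.
src, ctr, sink = Element("FromDevice"), Counter("Counter"), Element("Discard")
src.connect(ctr).connect(sink)
src.push({"dst": "10.0.0.1"})
print(ctr.count)   # 1
```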

Journal ArticleDOI
TL;DR: In this paper, the authors characterize the "shareholder" and "stakeholder" corporate governance models of common and code law countries respectively as resolving information asymmetry by public disclosure and private communication.

Journal ArticleDOI
13 Oct 2000-Science
TL;DR: In this article, the authors examined the competing dynamical processes involved in optical amplification and lasing in nanocrystal quantum dots and found that, despite a highly efficient intrinsic nonradiative Auger recombination, large optical gain can be developed at the wavelength of the emitting transition for close-packed solids of these dots.
Abstract: The development of optical gain in chemically synthesized semiconductor nanoparticles (nanocrystal quantum dots) has been intensely studied as the first step toward nanocrystal quantum dot lasers. We examined the competing dynamical processes involved in optical amplification and lasing in nanocrystal quantum dots and found that, despite a highly efficient intrinsic nonradiative Auger recombination, large optical gain can be developed at the wavelength of the emitting transition for close-packed solids of these dots. Narrowband stimulated emission with a pronounced gain threshold at wavelengths tunable with the size of the nanocrystal was observed, as expected from quantum confinement effects. These results unambiguously demonstrate the feasibility of nanocrystal quantum dot lasers.

Book
01 Jan 2000
TL;DR: This book covers methods for making and characterizing metal foams; their mechanical, thermal, and electrical properties; constitutive models, design formulae, and design for creep; and applications including sandwich structures, energy management (packaging and blast protection), sound absorption and vibration suppression, joining, cost estimation, and case studies.
Abstract: Introduction; Making Metal Foams; Characterization Methods; Properties of Metal Foams; Design Analysis for Material Selection; Design Formulae for Simple Structures; A Constitutive Model for Metal Foams; Design for Creep with Metal Foams; Sandwich Structures; Energy Management: Packaging and Blast Protection; Sound Absorption and Vibration Suppression; Thermal Management and Heat Transfer; Electrical Properties of Metal Foams; Cutting, Finishing and Joining; Cost Estimation and Viability; Case Studies; Suppliers of Metal Foams; Web Sites; Index.

Journal ArticleDOI
TL;DR: In human tissues, normal homeostasis requires intricately balanced interactions between cells and the network of secreted proteins known as the extracellular matrix, which is clearly evident in the interactions mediated by the cytokine transforming growth factor β (TGF-β).
Abstract: In human tissues, normal homeostasis requires intricately balanced interactions between cells and the network of secreted proteins known as the extracellular matrix. These cooperative interactions involve numerous cytokines acting through specific cell-surface receptors. When the balance between the cells and the extracellular matrix is perturbed, disease can result. This is clearly evident in the interactions mediated by the cytokine transforming growth factor β (TGF-β). TGF-β is a member of a family of dimeric polypeptide growth factors that includes bone morphogenic proteins and activins. All of these growth factors share a cluster of conserved cysteine residues that form a common cysteine . . .

Journal ArticleDOI
TL;DR: The authors found that the impact of monetary policy on lending is stronger for banks with less liquid balance sheets, i.e., banks with lower ratios of securities to assets, and that this pattern is largely attributable to the smaller banks, those in the bottom 95 percent of the size distribution.
Abstract: We study the monetary-transmission mechanism with a data set that includes quarterly observations of every insured U.S. commercial bank from 1976 to 1993. We find that the impact of monetary policy on lending is stronger for banks with less liquid balance sheets--i.e., banks with lower ratios of securities to assets. Moreover, this pattern is largely attributable to the smaller banks, those in the bottom 95 percent of the size distribution. Our results support the existence of a "bank lending channel" of monetary transmission, though they do not allow us to make precise statements about its quantitative importance.

Posted Content
TL;DR: The authors argue that the behavior of wages and returns to schooling indicates that technical change has been skill-biased during the past sixty years and that the recent increase in inequality is most likely due to an acceleration in skill bias.
Abstract: This essay discusses the effect of technical change on wage inequality. I argue that the behavior of wages and returns to schooling indicates that technical change has been skill-biased during the past sixty years. Furthermore, the recent increase in inequality is most likely due to an acceleration in skill bias. In contrast to twentieth century developments, most technical change during the nineteenth century appears to be skill-replacing. I suggest that this is because the increased supply of unskilled workers in the English cities made the introduction of these technologies profitable. On the other hand, the twentieth century has been characterized by skill-biased technical change because the rapid increase in the supply of skilled workers has induced the development of skill-complementary technologies. The recent acceleration in skill bias is in turn likely to have been a response to the acceleration in the supply of skills during the past several decades.

Journal ArticleDOI
15 Jun 2000-Nature
TL;DR: Proteomics can be divided into three main areas: protein micro-characterization for large-scale identification of proteins and their post-translational modifications; ‘differential display’ proteomics for comparison of protein levels with potential application in a wide range of diseases; and studies of protein–protein interactions using techniques such as mass spectrometry or the yeast two-hybrid system.
Abstract: Proteomics, the large-scale analysis of proteins, will contribute greatly to our understanding of gene function in the post-genomic era. Proteomics can be divided into three main areas: (1) protein micro-characterization for large-scale identification of proteins and their post-translational modifications; (2) 'differential display' proteomics for comparison of protein levels with potential application in a wide range of diseases; and (3) studies of protein-protein interactions using techniques such as mass spectrometry or the yeast two-hybrid system. Because it is often difficult to predict the function of a protein based on homology to other proteins or even their three-dimensional structure, determination of components of a protein complex or of a cellular structure is central in functional analysis. This aspect of proteomic studies is perhaps the area of greatest promise. After the revolution in molecular biology exemplified by the ease of cloning by DNA methods, proteomics will add to our understanding of the biochemistry of proteins, processes and pathways for years to come.

Journal ArticleDOI
25 Jun 2000
TL;DR: It is shown that QIM is "provably good" against arbitrary bounded and fully informed attacks, and achieves provably better rate distortion-robustness tradeoffs than currently popular spread-spectrum and low-bit(s) modulation methods.
Abstract: We consider the problem of embedding one signal (e.g., a digital watermark), within another "host" signal to form a third, "composite" signal. The embedding is designed to achieve efficient tradeoffs among the three conflicting goals of maximizing the information-embedding rate, minimizing the distortion between the host signal and composite signal, and maximizing the robustness of the embedding. We introduce new classes of embedding methods, termed quantization index modulation (QIM) and distortion-compensated QIM (DC-QIM), and develop convenient realizations in the form of what we refer to as dither modulation. Using deterministic models to evaluate digital watermarking methods, we show that QIM is "provably good" against arbitrary bounded and fully informed attacks, which arise in several copyright applications, and in particular it achieves provably better rate distortion-robustness tradeoffs than currently popular spread-spectrum and low-bit(s) modulation methods. Furthermore, we show that for some important classes of probabilistic models, DC-QIM is optimal (capacity-achieving) and regular QIM is near-optimal. These include both additive white Gaussian noise (AWGN) channels, which may be good models for hybrid transmission applications such as digital audio broadcasting, and mean-square-error-constrained attack channels that model private-key watermarking applications.
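A scalar sketch of the basic QIM idea: each host sample is quantized with one of two interleaved uniform quantizers selected by the message bit, and the decoder picks the quantizer whose lattice lies closest to the received sample. The step size and test values are arbitrary assumptions; the paper's dither modulation and DC-QIM refinements are not shown.

```python
import numpy as np

def qim_embed(x, bits, delta=1.0):
    """Embed one bit per host sample: quantize x to the lattice delta*Z (bit 0)
    or delta*Z + delta/2 (bit 1)."""
    x, bits = np.asarray(x, float), np.asarray(bits)
    offset = bits * (delta / 2.0)
    return np.round((x - offset) / delta) * delta + offset

def qim_decode(y, delta=1.0):
    """Decode by choosing the quantizer whose lattice point is closest to y."""
    y = np.asarray(y, float)
    d0 = np.abs(y - np.round(y / delta) * delta)
    d1 = np.abs(y - (np.round((y - delta / 2) / delta) * delta + delta / 2))
    return (d1 < d0).astype(int)

host = np.array([0.3, 1.7, -0.4, 2.2])     # hypothetical host samples
msg = np.array([1, 0, 1, 1])               # watermark bits
marked = qim_embed(host, msg, delta=1.0)
assert (qim_decode(marked, delta=1.0) == msg).all()
```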

Journal ArticleDOI
TL;DR: In this article, the authors provide an analytical treatment of a class of transforms, including various Laplace and Fourier transforms as special cases, that allow an analytical Treatment of a range of valuation and econometric problems.
Abstract: In the setting of ‘‘affine’’ jump-diffusion state processes, this paper provides an analytical treatment of a class of transforms, including various Laplace and Fourier transforms as special cases, that allow an analytical treatment of a range of valuation and econometric problems. Example applications include fixed-income pricing models, with a role for intensity-based models of default, as well as a wide range of option-pricing applications. An illustrative example examines the implications of stochastic volatility and jumps for option valuation. This example highlights the impact on option ‘smirks’ of the joint distribution of jumps in volatility and jumps in the underlying asset price, through both jump amplitude as well as jump timing.
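Schematically, the class of transforms referred to above has an exponential-affine form in the state, with coefficient functions obtained from ordinary differential equations. The notation below is an approximate, simplified sketch of that structure, not a restatement of the paper's full result.

```latex
% For an affine jump-diffusion X with a discount rate R(x) affine in x, the
% conditional transform is exponential-affine in the current state:
\[
  \psi(u, X_t, t, T)
  \;=\;
  \mathbb{E}\!\left[\, e^{-\int_t^T R(X_s)\,ds}\, e^{\,u \cdot X_T} \;\middle|\; \mathcal{F}_t \right]
  \;=\;
  e^{\,\alpha(t) \,+\, \beta(t)\cdot X_t},
\]
% where alpha(t) and beta(t) solve (generalized) Riccati ordinary differential
% equations determined by the affine drift, diffusion, jump, and discount
% coefficients, with terminal conditions alpha(T) = 0 and beta(T) = u.
```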

Journal ArticleDOI
TL;DR: The authors empirically analyze the characteristics of the Internet as a channel for two categories of homogeneous products, books and CDs, using a data set of over 8,500 price observations collected over a period of 15 months, comparing pricing behavior at 41 Internet and conventional retail outlets.
Abstract: There have been many claims that the Internet represents a new nearly "frictionless market." Our research empirically analyzes the characteristics of the Internet as a channel for two categories of homogeneous products: books and CDs. Using a data set of over 8,500 price observations collected over a period of 15 months, we compare pricing behavior at 41 Internet and conventional retail outlets. We find that prices on the Internet are 9-16% lower than prices in conventional outlets, depending on whether taxes, shipping, and shopping costs are included in the price. Additionally, we find that Internet retailers' price adjustments over time are up to 100 times smaller than conventional retailers' price adjustments, presumably reflecting lower menu costs in Internet channels. We also find that levels of price dispersion depend importantly on the measures employed. When we compare the prices posted by different Internet retailers we find substantial dispersion. Internet retailer prices differ by an average of 33% for books and 25% for CDs. However, when we weight these prices by proxies for market share, we find dispersion is lower in Internet channels than in conventional channels, reflecting the dominance of certain heavily branded retailers. We conclude that while there is lower friction in many dimensions of Internet competition, branding, awareness, and trust remain important sources of heterogeneity among Internet retailers.

Posted Content
TL;DR: In this article, the authors present an empirical examination of the determinants of country-level production of international patents and introduce a novel framework based on the concept of national innovative capacity, the ability of a country to produce and commercialize a flow of innovative technology over the long term.
Abstract: Motivated by differences in R&D productivity across advanced economies, this paper presents an empirical examination of the determinants of country-level production of international patents. We introduce a novel framework based on the concept of national innovative capacity. National innovative capacity is the ability of a country to produce and commercialize a flow of innovative technology over the long term. National innovative capacity depends on the strength of a nation's common innovation infrastructure (cross-cutting factors which contribute broadly to innovativeness throughout the economy), the environment for innovation in its leading industrial clusters, and the strength of linkages between these two areas. We use this framework to guide our empirical exploration into the determinants of country-level R&D productivity, specifically examining the relationship between international patenting (patenting by foreign countries in the United States) and variables associated with the national innovative capacity framework. While acknowledging important measurement issues arising from the use of patent data, we provide evidence for several findings. First, the production function for international patents is surprisingly well-characterized by a small but relatively nuanced set of observable factors, including R&D manpower and spending, aggregate policy choices such as the extent of IP protection and openness to international trade, and the share of research performed by the academic sector and funded by the private sector. As well, international patenting productivity depends on each individual country's knowledge stock. Further, the predicted level of national innovative capacity has an important impact on more downstream commercialization and diffusion activities (such as achieving a high market share of high-technology export markets). Finally, there has been convergence among OECD countries in terms of the estimated level of innovative capacity over the past quarter century.

Journal ArticleDOI
TL;DR: In this paper, the authors used an improved data set on income inequality which not only reduces measurement error, but also allows estimation via a panel technique, and found that in the short and medium term, an increase in a country's level of income inequality has a significant positive relationship with subsequent economic growth.
Abstract: This paper challenges the current belief that income inequality has a negative relationship with economic growth. It uses an improved data set on income inequality which not only reduces measurement error, but also allows estimation via a panel technique. Results suggest that in the short and medium term, an increase in a country's level of income inequality has a significant positive relationship with subsequent economic growth.

Journal ArticleDOI
TL;DR: In this paper, the authors show that measures of corporate governance, particularly the effectiveness of protection for minority shareholders, explain the extent of exchange rate depreciation and stock market decline better than do standard macroeconomic measures.

Journal ArticleDOI
13 Oct 2000-Science
TL;DR: It is concluded that although natural processes can potentially slow the rate of increase in atmospheric CO2, there is no natural "savior" waiting to assimilate all the anthropogenically produced CO2 in the coming century.
Abstract: Motivated by the rapid increase in atmospheric CO2 due to human activities since the Industrial Revolution, several international scientific research programs have analyzed the role of individual components of the Earth system in the global carbon cycle. Our knowledge of the carbon cycle within the oceans, terrestrial ecosystems, and the atmosphere is sufficiently extensive to permit us to conclude that although natural processes can potentially slow the rate of increase in atmospheric CO2, there is no natural “savior” waiting to assimilate all the anthropogenically produced CO2 in the coming century. Our knowledge is insufficient to describe the interactions between the components of the Earth system and the relationship between the carbon cycle and other biogeochemical and climatological processes. Overcoming this limitation requires a systems approach.

Journal ArticleDOI
TL;DR: A real-time computer vision and machine learning system for modeling and recognizing human behaviors in a visual surveillance task is described, demonstrating the ability to use a priori models to accurately classify real human behaviors and interactions with no additional tuning or training.
Abstract: We describe a real-time computer vision and machine learning system for modeling and recognizing human behaviors in a visual surveillance task. The system deals in particular with detecting when interactions between people occur and classifying the type of interaction. Examples of interesting interaction behaviors include following another person, altering one's path to meet another, and so forth. Our system combines top-down with bottom-up information in a closed feedback loop, with both components employing a statistical Bayesian approach. We propose and compare two different state-based learning architectures, namely, HMMs and CHMMs for modeling behaviors and interactions. Finally, a synthetic "Alife-style" training system is used to develop flexible prior models for recognizing human interactions. We demonstrate the ability to use these a priori models to accurately classify real human behaviors and interactions with no additional tuning or training.
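As a sketch of the state-based classification idea (per-behavior HMMs scored by likelihood), the NumPy snippet below implements the standard forward recursion and picks the best-scoring model. The toy parameters and discretized features are assumptions for illustration and do not reproduce the paper's CHMM architecture or training procedure.

```python
import numpy as np

def hmm_likelihood(obs, pi, A, B):
    """Forward algorithm: likelihood of a discrete observation sequence under an
    HMM with initial distribution pi, transition matrix A, emission matrix B."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()

def classify(obs, models):
    """Pick the behavior class whose HMM assigns the sequence the highest likelihood."""
    return max(models, key=lambda name: hmm_likelihood(obs, *models[name]))

# Toy example: two hypothetical behavior models over 3 discretized motion features.
follow = (np.array([0.6, 0.4]),
          np.array([[0.8, 0.2], [0.3, 0.7]]),
          np.array([[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]]))
meet   = (np.array([0.5, 0.5]),
          np.array([[0.5, 0.5], [0.5, 0.5]]),
          np.array([[0.2, 0.2, 0.6], [0.6, 0.2, 0.2]]))
print(classify([0, 0, 1, 2], {"follow": follow, "meet": meet}))
```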