
Showing papers by "Stanford University" published in 2004


Book
01 Mar 2004
TL;DR: This book gives a comprehensive introduction to convex optimization, with the focus not on the theory of the problems themselves but on recognizing convex optimization problems and then finding the most appropriate technique for solving them.
Abstract: Convex optimization problems arise frequently in many different fields. A comprehensive introduction to the subject, this book shows in detail how such problems can be solved numerically with great efficiency. The focus is on recognizing convex optimization problems and then finding the most appropriate technique for solving them. The text contains many worked examples and homework exercises and will appeal to students, researchers and practitioners in fields such as engineering, computer science, mathematics, statistics, finance, and economics.
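As a toy illustration of the numerical side (my own sketch, not taken from the book): once a problem is recognized as convex, even plain gradient descent provably reaches the global minimum of a smooth strongly convex function, such as the quadratic below.

```python
# Minimize f(x, y) = (x - 1)^2 + 2*(y + 2)^2, a strictly convex quadratic.
# Convexity guarantees that the stationary point gradient descent finds
# is the global minimum, here (1, -2).

def grad_descent(step=0.1, iters=200):
    x, y = 0.0, 0.0
    for _ in range(iters):
        gx = 2 * (x - 1)          # df/dx
        gy = 4 * (y + 2)          # df/dy
        x -= step * gx
        y -= step * gy
    return x, y

x_opt, y_opt = grad_descent()
```

Each coordinate contracts toward the minimizer geometrically (factors 0.8 and 0.6 per step), so 200 iterations are far more than enough here.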

33,341 citations


Book
D. L. Donoho
01 Jan 2004
TL;DR: It is possible to design n = O(N log(m)) nonadaptive measurements allowing reconstruction with accuracy comparable to that attainable with direct knowledge of the N most important coefficients, and a good approximation to those N important coefficients is extracted from the n measurements by solving a linear program (Basis Pursuit in signal processing).
Abstract: Suppose x is an unknown vector in R^m (a digital image or signal); we plan to measure n general linear functionals of x and then reconstruct. If x is known to be compressible by transform coding with a known transform, and we reconstruct via the nonlinear procedure defined here, the number of measurements n can be dramatically smaller than the size m. Thus, certain natural classes of images with m pixels need only n = O(m^(1/4) log^(5/2)(m)) nonadaptive nonpixel samples for faithful recovery, as opposed to the usual m pixel samples. More specifically, suppose x has a sparse representation in some orthonormal basis (e.g., wavelet, Fourier) or tight frame (e.g., curvelet, Gabor), so the coefficients belong to an l_p ball for 0 < p ≤ 1.
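A tiny sketch of the recovery idea in plain Python (my own toy, using iterative soft-thresholding as an l1 proxy rather than the paper's Basis Pursuit linear program): a 1-sparse length-4 vector is recovered from only 3 linear measurements.

```python
# Recover a sparse x from b = A x, with fewer measurements (3) than
# unknowns (4), by minimizing ||Ax - b||^2 + lam*||x||_1 via ISTA.

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def soft(v, t):                      # soft-threshold: sign(v)*max(|v|-t, 0)
    return max(v - t, 0.0) - max(-v - t, 0.0)

def ista(A, b, lam=0.01, step=0.1, iters=5000):
    x = [0.0] * len(A[0])
    for _ in range(iters):
        r = [bi - yi for bi, yi in zip(b, matvec(A, x))]   # residual b - Ax
        g = matvec(list(zip(*A)), r)                       # gradient part A^T r
        x = [soft(xi + step * gi, step * lam) for xi, gi in zip(x, g)]
    return x

A = [[1.0, 0.3, 0.2, 0.1],
     [0.2, 1.0, 0.1, 0.3],
     [0.1, 0.2, 1.0, 0.4]]
x0 = [0.0, 0.0, 2.0, 0.0]            # the unknown sparse vector
b = matvec(A, x0)                    # 3 nonadaptive linear measurements
xhat = ista(A, b)
```

With a small l1 penalty the estimate lands on the correct support with only a slight shrinkage bias on the nonzero entry.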

18,609 citations


Journal ArticleDOI
TL;DR: A publicly available algorithm that requires only the same order of magnitude of computational effort as ordinary least squares applied to the full set of covariates is described.
Abstract: The purpose of model selection algorithms such as All Subsets, Forward Selection and Backward Elimination is to choose a linear model on the basis of the same set of data to which the model will be applied. Typically we have available a large collection of possible covariates from which we hope to select a parsimonious set for the efficient prediction of a response variable. Least Angle Regression (LARS), a new model selection algorithm, is a useful and less greedy version of traditional forward selection methods. Three main properties are derived: (1) A simple modification of the LARS algorithm implements the Lasso, an attractive version of ordinary least squares that constrains the sum of the absolute regression coefficients; the LARS modification calculates all possible Lasso estimates for a given problem, using an order of magnitude less computer time than previous methods. (2) A different LARS modification efficiently implements Forward Stagewise linear regression, another promising new model selection method; this connection explains the similar numerical results previously observed for the Lasso and Stagewise, and helps us understand the properties of both methods, which are seen as constrained versions of the simpler LARS algorithm. (3) A simple approximation for the degrees of freedom of a LARS estimate is available, from which we derive a Cp estimate of prediction error; this allows a principled choice among the range of possible LARS estimates. LARS and its variants are computationally efficient: the paper describes a publicly available algorithm that requires only the same order of magnitude of computational effort as ordinary least squares applied to the full set of covariates.
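To make the "less greedy forward selection" idea concrete, here is a toy incremental Forward Stagewise loop in plain Python (my own sketch with made-up data, not the paper's LARS implementation): each pass nudges the coefficient of the covariate most correlated with the current residual by a tiny step eps.

```python
# Incremental Forward Stagewise: many tiny cautious steps instead of the
# large greedy steps of classical Forward Selection.

def stagewise(X, y, eps=0.01, iters=4000):
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(iters):
        resid = [y[i] - sum(X[i][j] * beta[j] for j in range(p))
                 for i in range(n)]
        corr = [sum(X[i][j] * resid[i] for i in range(n)) for j in range(p)]
        j = max(range(p), key=lambda k: abs(corr[k]))
        beta[j] += eps if corr[j] > 0 else -eps      # tiny step, not a full fit
    return beta

# Noise-free toy data generated from beta_true = (2, -1).
X = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, -0.5]]
y = [2 * a - b for a, b in X]
beta = stagewise(X, y)
```

Because y lies in the column span, the coefficients settle within about one step size of the least-squares solution (2, -1).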

7,828 citations


Journal ArticleDOI
22 Apr 2004-Nature
TL;DR: Reliable quantification of the leaf economics spectrum and its interaction with climate will prove valuable for modelling nutrient fluxes and vegetation boundaries under changing land-use and climate.
Abstract: Bringing together leaf trait data spanning 2,548 species and 175 sites we describe, for the first time at global scale, a universal spectrum of leaf economics consisting of key chemical, structural and physiological properties. The spectrum runs from quick to slow return on investments of nutrients and dry mass in leaves, and operates largely independently of growth form, plant functional type or biome. Categories along the spectrum would, in general, describe leaf economic variation at the global scale better than plant functional types, because functional types overlap substantially in their leaf traits. Overall, modulation of leaf traits and trait relationships by climate is surprisingly modest, although some striking and significant patterns can be seen. Reliable quantification of the leaf economics spectrum and its interaction with climate will prove valuable for modelling nutrient fluxes and vegetation boundaries under changing land-use and climate.

6,360 citations


Journal ArticleDOI
TL;DR: This article examines health promotion and disease prevention from the perspective of social cognitive theory, a multifaceted causal structure in which self-efficacy beliefs operate together with goals, outcome expectations, and perceived environmental impediments and facilitators in the regulation of human motivation, behavior, and well-being.
Abstract: This article examines health promotion and disease prevention from the perspective of social cognitive theory. This theory posits a multifaceted causal structure in which self-efficacy beliefs operate together with goals, outcome expectations, and perceived environmental impediments and facilitators in the regulation of human motivation, behavior, and well-being. Belief in one’s efficacy to exercise control is a common pathway through which psychosocial influences affect health functioning. This core belief affects each of the basic processes of personal change—whether people even consider changing their health habits, whether they mobilize the motivation and perseverance needed to succeed should they do so, their ability to recover from setbacks and relapses, and how well they maintain the habit changes they have achieved. Human health is a social matter, not just an individual one. A comprehensive approach to health promotion also requires changing the practices of social systems that have widespread effects on human health.

6,004 citations


Journal ArticleDOI
TL;DR: A generative model for documents introduced by Blei, Ng, and Jordan is described, and a Markov chain Monte Carlo algorithm for inference in this model is presented; the algorithm is used to analyze abstracts from PNAS, with Bayesian model selection establishing the number of topics.
Abstract: A first step in identifying the content of a document is determining which topics that document addresses. We describe a generative model for documents, introduced by Blei, Ng, and Jordan [Blei, D. M., Ng, A. Y. & Jordan, M. I. (2003) J. Machine Learn. Res. 3, 993-1022], in which each document is generated by choosing a distribution over topics and then choosing each word in the document from a topic selected according to this distribution. We then present a Markov chain Monte Carlo algorithm for inference in this model. We use this algorithm to analyze abstracts from PNAS by using Bayesian model selection to establish the number of topics. We show that the extracted topics capture meaningful structure in the data, consistent with the class designations provided by the authors of the articles, and outline further applications of this analysis, including identifying “hot topics” by examining temporal dynamics and tagging abstracts to illustrate semantic content.
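For readers who want the mechanics, here is a minimal collapsed Gibbs sampler for this topic model in plain Python (my own toy implementation and corpus, not the authors' code): each sweep resamples every token's topic from its conditional distribution given all other assignments.

```python
import random
random.seed(0)

docs = [["apple", "banana", "apple", "fruit"],
        ["banana", "fruit", "apple", "banana"],
        ["cpu", "gpu", "cache", "cpu"],
        ["cache", "cpu", "gpu", "gpu"]]
vocab = sorted({w for d in docs for w in d})
K, V = 2, len(vocab)
alpha, beta = 0.5, 0.5               # symmetric Dirichlet hyperparameters

# z[d][i]: topic of token i in doc d; count tables for the collapsed sampler.
z = [[random.randrange(K) for _ in d] for d in docs]
ndk = [[0] * K for _ in docs]        # topic counts per document
nkw = [[0] * V for _ in range(K)]    # word counts per topic
nk = [0] * K                         # total tokens per topic
for d, doc in enumerate(docs):
    for i, w in enumerate(doc):
        k = z[d][i]
        ndk[d][k] += 1; nkw[k][vocab.index(w)] += 1; nk[k] += 1

for _ in range(200):                 # Gibbs sweeps
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k, v = z[d][i], vocab.index(w)
            ndk[d][k] -= 1; nkw[k][v] -= 1; nk[k] -= 1
            # conditional p(z = t | everything else), up to a constant
            weights = [(ndk[d][t] + alpha) * (nkw[t][v] + beta) / (nk[t] + V * beta)
                       for t in range(K)]
            k = random.choices(range(K), weights)[0]
            z[d][i] = k
            ndk[d][k] += 1; nkw[k][v] += 1; nk[k] += 1
```

After a few hundred sweeps the two obvious themes (fruit vs. hardware) typically separate into the two topics; the count tables always stay consistent with the assignments.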

5,680 citations


Journal ArticleDOI
TL;DR: The data reveal that multiple extracellular, cytoplasmic, and nuclear regulators intricately modulate Wnt signaling levels, and that receptor-ligand specificity and feedback loops help to determine Wnt signaling outputs.
Abstract: Tight control of cell-cell communication is essential for the generation of a normally patterned embryo. A critical mediator of key cell-cell signaling events during embryogenesis is the highly conserved Wnt family of secreted proteins. Recent biochemical and genetic analyses have greatly enriched our understanding of how Wnts signal, and the list of canonical Wnt signaling components has exploded. The data reveal that multiple extracellular, cytoplasmic, and nuclear regulators intricately modulate Wnt signaling levels. In addition, receptor-ligand specificity and feedback loops help to determine Wnt signaling outputs. Wnts are required for adult tissue maintenance, and perturbations in Wnt signaling promote both human degenerative diseases and cancer. The next few years are likely to see novel therapeutic reagents aimed at controlling Wnt signaling in order to alleviate these conditions.

5,129 citations


Journal ArticleDOI
TL;DR: In this paper, the authors compared the natural and anthropogenic controls on the conversion of unreactive N2 to more reactive forms of nitrogen (Nr) and found that human activities increasingly dominate the N budget at the global and at most regional scales, and the terrestrial and open ocean N budgets are essentially dis-connected.
Abstract: This paper contrasts the natural and anthropogenic controls on the conversion of unreactive N2 to more reactive forms of nitrogen (Nr). A variety of data sets are used to construct global N budgets for 1860 and the early 1990s and to make projections for the global N budget in 2050. Regional N budgets for Asia, North America, and other major regions for the early 1990s, as well as the marine N budget, are presented to highlight the dominant fluxes of nitrogen in each region. Important findings are that human activities increasingly dominate the N budget at the global and at most regional scales, the terrestrial and open ocean N budgets are essentially disconnected, and the fixed forms of N are accumulating in most environmental reservoirs. The largest uncertainties in our understanding of the N budget at most scales are the rates of natural biological nitrogen fixation, the amount of Nr storage in most environmental reservoirs, and the production rates of N2 by denitrification.

4,555 citations


Journal ArticleDOI
30 Sep 2004-Neuron
TL;DR: This work reviews those forms of LTP and LTD for which mechanisms have been most firmly established, and examples are provided that show how these mechanisms can contribute to experience-dependent modifications of brain function.

3,767 citations


Journal ArticleDOI
Midori A. Harris, Jennifer I. Clark, A. Ireland, Jane Lomax, Michael Ashburner, R. Foulger, Karen Eilbeck, Suzanna E. Lewis, B. Marshall, Christopher J. Mungall, J. Richter, Gerald M. Rubin, Judith A. Blake, Carol J. Bult, M. Dolan, H. Drabkin, Janan T. Eppig, D. P. Hill, L. Ni, M. Ringwald, Rama Balakrishnan, J. M. Cherry, Karen R. Christie, Maria C. Costanzo, Selina S. Dwight, Stacia R. Engel, Dianna G. Fisk, Jodi E. Hirschman, Eurie L. Hong, Robert S. Nash, Anand Sethuraman, Chandra L. Theesfeld, David Botstein, Kara Dolinski, Becket Feierbach, Tanya Z. Berardini, S. Mundodi, Seung Y. Rhee, Rolf Apweiler, Daniel Barrell, E. Camon, E. Dimmer, Lee, Rex L. Chisholm, Pascale Gaudet, Warren A. Kibbe, Ranjana Kishore, Erich M. Schwarz, Paul W. Sternberg, M. Gwinn, L. Hannick, J. Wortman, Matthew Berriman, Wood, N. de la Cruz, Peter J. Tonellato, Pankaj Jaiswal, T. Seigfried, R. White
TL;DR: The Gene Ontology (GO) project as discussed by the authors provides structured, controlled vocabularies and classifications that cover several domains of molecular and cellular biology and are freely available for community use in the annotation of genes, gene products and sequences.
Abstract: The Gene Ontology (GO) project (http://www.geneontology.org/) provides structured, controlled vocabularies and classifications that cover several domains of molecular and cellular biology and are freely available for community use in the annotation of genes, gene products and sequences. Many model organism databases and genome annotation groups use the GO and contribute their annotation sets to the GO resource. The GO database integrates the vocabularies and contributed annotations and provides full access to this information in several formats. Members of the GO Consortium continually work collectively, involving outside experts as needed, to expand and update the GO vocabularies. The GO Web resource also provides access to extensive documentation about the GO project and links to applications that use GO data for functional analyses.
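As a sketch of how such a controlled vocabulary is typically consumed (a hand-written toy excerpt of is_a links, not parsed from the actual GO files): terms form a directed acyclic graph, and annotation to a term implies annotation to all of its ancestors.

```python
# Toy is_a fragment of the GO DAG (each term may have several parents;
# this excerpt happens to be a simple chain).
is_a = {
    "GO:0006915": ["GO:0012501"],    # apoptotic process -> programmed cell death
    "GO:0012501": ["GO:0008219"],    # programmed cell death -> cell death
    "GO:0008219": ["GO:0009987"],    # cell death -> cellular process
    "GO:0009987": ["GO:0008150"],    # cellular process -> biological_process
    "GO:0008150": [],                # root of the biological_process ontology
}

def ancestors(term):
    """All terms reachable upward from `term` via is_a links."""
    out, stack = set(), [term]
    while stack:
        for parent in is_a.get(stack.pop(), []):
            if parent not in out:
                out.add(parent)
                stack.append(parent)
    return out

anc = ancestors("GO:0006915")
```

This upward closure is the core operation behind GO-based annotation propagation and enrichment analyses.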

3,565 citations


Journal ArticleDOI
TL;DR: A goodness-of-fit analysis applied at the individual subject level suggests that activity in the default-mode network may ultimately prove a sensitive and specific biomarker for incipient AD.
Abstract: Recent functional imaging studies have revealed coactivation in a distributed network of cortical regions that characterizes the resting state, or default mode, of the human brain. Among the brain regions implicated in this network, several, including the posterior cingulate cortex and inferior parietal lobes, have also shown decreased metabolism early in the course of Alzheimer's disease (AD). We reasoned that default-mode network activity might therefore be abnormal in AD. To test this hypothesis, we used independent component analysis to isolate the network in a group of 13 subjects with mild AD and in a group of 13 age-matched elderly controls as they performed a simple sensory-motor processing task. Three important findings are reported. Prominent coactivation of the hippocampus, detected in all groups, suggests that the default-mode network is closely involved with episodic memory processing. The AD group showed decreased resting-state activity in the posterior cingulate and hippocampus, suggesting that disrupted connectivity between these two regions accounts for the posterior cingulate hypometabolism commonly detected in positron emission tomography studies of early AD. Finally, a goodness-of-fit analysis applied at the individual subject level suggests that activity in the default-mode network may ultimately prove a sensitive and specific biomarker for incipient AD.

Posted Content
TL;DR: A new type of Identity-Based Encryption called Fuzzy Identity-Based Encryption is introduced, in which an identity is viewed as a set of descriptive attributes, and a private key for one identity can decrypt a ciphertext encrypted with another identity if and only if the two identities are close to each other as measured by the "set overlap" distance metric.
Abstract: We introduce a new type of Identity-Based Encryption (IBE) scheme that we call Fuzzy Identity-Based Encryption. In Fuzzy IBE we view an identity as set of descriptive attributes. A Fuzzy IBE scheme allows for a private key for an identity, ω, to decrypt a ciphertext encrypted with an identity, ω ′, if and only if the identities ω and ω ′ are close to each other as measured by the “set overlap” distance metric. A Fuzzy IBE scheme can be applied to enable encryption using biometric inputs as identities; the error-tolerance property of a Fuzzy IBE scheme is precisely what allows for the use of biometric identities, which inherently will have some noise each time they are sampled. Additionally, we show that Fuzzy-IBE can be used for a type of application that we term “attribute-based encryption”. In this paper we present two constructions of Fuzzy IBE schemes. Our constructions can be viewed as an Identity-Based Encryption of a message under several attributes that compose a (fuzzy) identity. Our IBE schemes are both error-tolerant and secure against collusion attacks. Additionally, our basic construction does not use random oracles. We prove the security of our schemes under the Selective-ID security model.
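The error-tolerance property can be illustrated with a toy Shamir-style threshold sketch in plain Python (my own drastic simplification; the paper's actual constructions use bilinear pairings and are collusion-resistant in a way this toy is not): the "secret" is recoverable exactly when the attribute overlap reaches the threshold d.

```python
import random
random.seed(1)

P = 2**61 - 1          # a Mersenne prime; arithmetic is over the field Z_P
d = 3                  # required attribute overlap (polynomial degree d-1)

def share(secret, attrs):
    """One share per ciphertext attribute (attributes are nonzero ints)."""
    coeffs = [secret] + [random.randrange(P) for _ in range(d - 1)]
    return {a: sum(c * pow(a, i, P) for i, c in enumerate(coeffs)) % P
            for a in attrs}

def recover(shares, my_attrs):
    """Lagrange-interpolate the polynomial at x = 0 from d matching shares."""
    pts = [(a, shares[a]) for a in sorted(my_attrs) if a in shares][:d]
    if len(pts) < d:
        return None                      # too little overlap: nothing learned
    s = 0
    for j, (xj, yj) in enumerate(pts):
        num = den = 1
        for m, (xm, _) in enumerate(pts):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        s = (s + yj * num * pow(den, P - 2, P)) % P   # Fermat inverse of den
    return s

ct_attrs = {1, 2, 3, 4}                  # "identity" the message is encrypted to
shares = share(42, ct_attrs)
ok = recover(shares, {2, 3, 4, 7})       # overlap 3 >= d: secret recovered
fail = recover(shares, {1, 2, 9})        # overlap 2 < d: nothing
```

Any d matching attributes interpolate the same degree-(d-1) polynomial and hence the same secret, which is exactly the noise tolerance needed for biometric identities.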

Proceedings ArticleDOI
04 Jul 2004
TL;DR: This work thinks of the expert as trying to maximize a reward function that is expressible as a linear combination of known features, and gives an algorithm for learning the task demonstrated by the expert, based on using "inverse reinforcement learning" to try to recover the unknown reward function.
Abstract: We consider learning in a Markov decision process where we are not explicitly given a reward function, but where instead we can observe an expert demonstrating the task that we want to learn to perform. This setting is useful in applications (such as the task of driving) where it may be difficult to write down an explicit reward function specifying exactly how different desiderata should be traded off. We think of the expert as trying to maximize a reward function that is expressible as a linear combination of known features, and give an algorithm for learning the task demonstrated by the expert. Our algorithm is based on using "inverse reinforcement learning" to try to recover the unknown reward function. We show that our algorithm terminates in a small number of iterations, and that even though we may never recover the expert's reward function, the policy output by the algorithm will attain performance close to that of the expert, where here performance is measured with respect to the expert's unknown reward function.

Proceedings ArticleDOI
08 Jun 2004
TL;DR: A novel Locality-Sensitive Hashing scheme for the Approximate Nearest Neighbor Problem under lp norm, based on p-stable distributions that improves the running time of the earlier algorithm and yields the first known provably efficient approximate NN algorithm for the case p<1.
Abstract: We present a novel Locality-Sensitive Hashing scheme for the Approximate Nearest Neighbor Problem under the lp norm, based on p-stable distributions. Our scheme improves the running time of the earlier algorithm for the case of the lp norm. It also yields the first known provably efficient approximate NN algorithm for the case p < 1.
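A minimal sketch of the p-stable hash family for the l2 case (p = 2, using the fact that the Gaussian is 2-stable; parameters here are illustrative, not tuned as in the paper): h(v) = floor((a·v + b) / r), so nearby points collide under more hash functions than distant ones.

```python
import random
random.seed(0)

def make_hash(dim, r=4.0):
    a = [random.gauss(0, 1) for _ in range(dim)]   # 2-stable random projection
    b = random.uniform(0, r)                       # random offset in [0, r)
    def h(v):
        return int((sum(ai * vi for ai, vi in zip(a, v)) + b) // r)
    return h

hashes = [make_hash(5) for _ in range(20)]
u    = [1.0, 2.0, 0.0, -1.0, 0.5]
near = [1.1, 2.0, 0.1, -1.0, 0.5]      # small l2 distance from u
far  = [9.0, -7.0, 4.0, 6.0, -3.0]     # large l2 distance from u
near_coll = sum(h(u) == h(near) for h in hashes)
far_coll  = sum(h(u) == h(far) for h in hashes)
```

The collision-probability gap between near and far points is what the NN data structure amplifies by concatenating and repeating such hashes.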

Book ChapterDOI
02 May 2004
TL;DR: This work defines and constructs a mechanism that enables Alice to provide a key to the gateway with which the gateway can test whether the word "urgent" is a keyword in the email without learning anything else about the email.
Abstract: We study the problem of searching on data that is encrypted using a public key system. Consider user Bob who sends email to user Alice encrypted under Alice’s public key. An email gateway wants to test whether the email contains the keyword “urgent” so that it could route the email accordingly. Alice, on the other hand does not wish to give the gateway the ability to decrypt all her messages. We define and construct a mechanism that enables Alice to provide a key to the gateway that enables the gateway to test whether the word “urgent” is a keyword in the email without learning anything else about the email. We refer to this mechanism as Public Key Encryption with keyword Search. As another example, consider a mail server that stores various messages publicly encrypted for Alice by others. Using our mechanism Alice can send the mail server a key that will enable the server to identify all messages containing some specific keyword, but learn nothing else. We define the concept of public key encryption with keyword search and give several constructions.

Journal ArticleDOI
TL;DR: Java Treeview is an open-source, cross-platform rewrite of the popular Treeview microarray visualization tool that handles very large datasets well and supports extensions to the file format that allow the results of additional analysis to be visualized and compared.
Abstract: Summary: Open source software encourages innovation by allowing users to extend the functionality of existing applications. Treeview is a popular application for the visualization of microarray data, but is closed-source and platform-specific, which limits both its current utility and suitability as a platform for further development. Java Treeview is an open-source, cross-platform rewrite that handles very large datasets well, and supports extensions to the file format that allow the results of additional analysis to be visualized and compared. The combination of a general file format and open source makes Java Treeview an attractive choice for solving a class of visualization problems. An applet version is also available that can be used on any website with no special server-side setup. Availability: http://jtreeview.sourceforge.net under GPL.

Journal ArticleDOI
TL;DR: Internet data collection methods, with a focus on self-report questionnaires from self-selected samples, are evaluated and compared with traditional paper-and-pencil methods and it is concluded that Internet methods can contribute to many areas of psychology.
Abstract: The rapid growth of the Internet provides a wealth of new research opportunities for psychologists. Internet data collection methods, with a focus on self-report questionnaires from self-selected samples, are evaluated and compared with traditional paper-and-pencil methods. Six preconceptions about Internet samples and data quality are evaluated by comparing a new large Internet sample (N = 361,703) with a set of 510 published traditional samples. Internet samples are shown to be relatively diverse with respect to gender, socioeconomic status, geographic region, and age. Moreover, Internet findings generalize across presentation formats, are not adversely affected by nonserious or repeat responders, and are consistent with findings from traditional methods. It is concluded that Internet methods can contribute to many areas of psychology.

Journal ArticleDOI
TL;DR: This work considers the problem of finding a linear iteration that yields distributed averaging consensus over a network, i.e., that asymptotically computes the average of some initial values given at the nodes, and gives several extensions and variations on the basic problem.
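A small sketch of such a linear averaging iteration on a 4-node path graph (my own toy using Metropolis-Hastings weights, one simple valid choice; the paper is about optimizing the weight matrix itself): each node repeatedly forms a weighted average with its neighbors, and all values converge to the global average.

```python
# x <- W x with symmetric, doubly stochastic Metropolis weights
# W_ij = 1 / (1 + max(deg_i, deg_j)) for neighbors i, j.
nbrs = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
deg = {i: len(n) for i, n in nbrs.items()}

def step(x):
    new = []
    for i in range(len(x)):
        xi, w_sum = 0.0, 0.0
        for j in nbrs[i]:
            w = 1.0 / (1 + max(deg[i], deg[j]))   # Metropolis weight
            xi += w * x[j]
            w_sum += w
        new.append(xi + (1 - w_sum) * x[i])       # self-weight keeps rows summing to 1
    return new

x = [8.0, 0.0, 4.0, 0.0]                          # global average is 3.0
for _ in range(200):
    x = step(x)
```

Because W is symmetric and doubly stochastic, the sum (hence the average) is preserved at every step, and for a connected graph the iteration contracts onto the consensus value.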

Journal ArticleDOI
01 Nov 2004-Sleep
TL;DR: In adults, it appeared that sleep latency, percentages of stage 1 and stage 2 significantly increased with age while percentage of REM sleep decreased, and effect sizes for the different sleep parameters were greatly modified by the quality of subject screening, diminishing or even masking age associations with differentSleep parameters.
Abstract: Objectives: The purposes of this study were to identify age-related changes in objectively recorded sleep patterns across the human life span in healthy individuals and to clarify whether sleep latency and percentages of stage 1, stage 2, and rapid eye movement (REM) sleep significantly change with age. Design: Review of literature of articles published between 1960 and 2003 in peer-reviewed journals and meta-analysis. Participants: 65 studies representing 3,577 subjects aged 5 years to 102 years. Measurement: The research reports included in this meta-analysis met the following criteria: (1) included nonclinical participants aged 5 years or older; (2) included measures of sleep characteristics by “all night” polysomnography or actigraphy on sleep latency, sleep efficiency, total sleep time, stage 1 sleep, stage 2 sleep, slow-wave sleep, REM sleep, REM latency, or minutes awake after sleep onset; (3) included numeric presentation of the data; and (4) were published between 1960 and 2003 in peer-reviewed journals. Results: In children and adolescents, total sleep time decreased with age only in studies performed on school days. Percentage of slow-wave sleep was significantly negatively correlated with age. Percentages of stage 2 and REM sleep significantly changed with age. In adults, total sleep time, sleep efficiency, percentage of slow-wave sleep, percentage of REM sleep, and REM latency all significantly decreased with age, while sleep latency, percentage of stage 1 sleep, percentage of stage 2 sleep, and wake after sleep onset significantly increased with age. However, only sleep efficiency continued to significantly decrease after 60 years of age. The magnitudes of the effect sizes noted changed depending on whether or not studied participants were screened for mental disorders, organic diseases, use of drug or alcohol, obstructive sleep apnea syndrome, or other sleep disorders. 
Conclusions: In adults, it appeared that sleep latency, percentages of stage 1 and stage 2 significantly increased with age while percentage of REM sleep decreased. However, effect sizes for the different sleep parameters were greatly modified by the quality of subject screening, diminishing or even masking age associations with different sleep parameters. The number of studies that examined the evolution of sleep parameters with age are scant among school-aged children, adolescents, and middle-aged adults. There are also very few studies that examined the effect of race on polysomnographic sleep parameters.

Journal ArticleDOI
TL;DR: A panel of four antibodies (ER, HER1, HER2, and cytokeratin 5/6) can accurately identify basal-like tumors using standard available clinical tools and shows high specificity.
Abstract: Purpose: Expression profiling studies classified breast carcinomas into estrogen receptor (ER)+/luminal, normal breast-like, HER2 overexpressing, and basal-like groups, with the latter two associated with poor outcomes. Currently, there exist clinical assays that identify ER+/luminal and HER2-overexpressing tumors, and we sought to develop a clinical assay for breast basal-like tumors. Experimental Design: To identify an immunohistochemical profile for breast basal-like tumors, we collected a series of known basal-like tumors and tested them for protein patterns that are characteristic of this subtype. Next, we examined the significance of these protein patterns using tissue microarrays and evaluated the prognostic significance of these findings. Results: Using a panel of 21 basal-like tumors, which was determined using gene expression profiles, we saw that this subtype was typically immunohistochemically negative for estrogen receptor and HER2 but positive for basal cytokeratins, HER1, and/or c-KIT. Using breast carcinoma tissue microarrays representing 930 patients with 17.4-year mean follow-up, basal cytokeratin expression was associated with low disease-specific survival. HER1 expression was observed in 54% of cases positive for basal cytokeratins ( versus 11% of negative cases) and was associated with poor survival independent of nodal status and size. c-KIT expression was more common in basal-like tumors than in other breast cancers but did not influence prognosis. Conclusions: A panel of four antibodies (ER, HER1, HER2, and cytokeratin 5/6) can accurately identify basal-like tumors using standard available clinical tools and shows high specificity. These studies show that many basal-like tumors express HER1, which suggests candidate drugs for evaluation in these patients.

Journal ArticleDOI
TL;DR: Solid tumours contain regions at very low oxygen concentrations (hypoxia), often surrounding areas of necrosis, which provides an opportunity for tumour-selective therapy, including prodrugs activated by Hypoxia, hypoxia-specific gene therapy, targeting the hypoxIA-inducible factor 1 transcription factor, and recombinant anaerobic bacteria.
Abstract: Solid tumours contain regions at very low oxygen concentrations (hypoxia), often surrounding areas of necrosis. The cells in these hypoxic regions are resistant to both radiotherapy and chemotherapy. However, the existence of hypoxia and necrosis also provides an opportunity for tumour-selective therapy, including prodrugs activated by hypoxia, hypoxia-specific gene therapy, targeting the hypoxia-inducible factor 1 transcription factor, and recombinant anaerobic bacteria. These strategies could turn what is now an impediment into a significant advantage for cancer therapy.

Journal ArticleDOI
05 Mar 2004-Science
TL;DR: Evidence is assembled of possible interrelations between Wnt and other growth factor signaling, β-catenin functions, and cadherin-mediated adhesion in tissue differentiation.
Abstract: The specification and proper arrangements of new cell types during tissue differentiation require the coordinated regulation of gene expression and precise interactions between neighboring cells. Of the many growth factors involved in these events, Wnts are particularly interesting regulators, because a key component of their signaling pathway, β-catenin, also functions as a component of the cadherin complex, which controls cell-cell adhesion and influences cell migration. Here, we assemble evidence of possible interrelations between Wnt and other growth factor signaling, β-catenin functions, and cadherin-mediated adhesion.

Book
01 Jan 2004
TL;DR: This paper establishes the possibility of stable recovery under a combination of sufficient sparsity and favorable structure of the overcomplete system, and shows that similar stability is also available using the basis pursuit and matching pursuit algorithms.
Abstract: Overcomplete representations are attracting interest in signal processing theory, particularly due to their potential to generate sparse representations of signals. However, in general, the problem of finding sparse representations must be unstable in the presence of noise. This paper establishes the possibility of stable recovery under a combination of sufficient sparsity and favorable structure of the overcomplete system. Considering an ideal underlying signal that has a sufficiently sparse representation, it is assumed that only a noisy version of it can be observed. Assuming further that the overcomplete system is incoherent, it is shown that the optimally sparse approximation to the noisy data differs from the optimally sparse decomposition of the ideal noiseless signal by at most a constant multiple of the noise level. As this optimal-sparsity method requires heavy (combinatorial) computational effort, approximation algorithms are considered. It is shown that similar stability is also available using the basis pursuit and matching pursuit algorithms. Furthermore, it is shown that these methods result in sparse approximation of the noisy data that contains only terms also appearing in the unique sparsest representation of the ideal noiseless sparse signal.
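A toy matching pursuit sketch in plain Python (my own example with a hand-built overcomplete dictionary, not the paper's code): greedily pick the unit-norm atom most correlated with the residual and subtract its contribution.

```python
import math

# Overcomplete dictionary: 4 unit-norm atoms in R^3.
atoms = [[1.0, 0.0, 0.0],
         [0.0, 1.0, 0.0],
         [0.0, 0.0, 1.0],
         [1 / math.sqrt(2), 1 / math.sqrt(2), 0.0]]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matching_pursuit(b, steps=10):
    r = list(b)                        # residual starts at the signal
    coef = [0.0] * len(atoms)
    for _ in range(steps):
        ips = [dot(a, r) for a in atoms]
        j = max(range(len(atoms)), key=lambda k: abs(ips[k]))
        if abs(ips[j]) < 1e-12:        # residual fully explained
            break
        coef[j] += ips[j]
        r = [ri - ips[j] * ai for ri, ai in zip(r, atoms[j])]
    return coef, r

b = [2.0, 0.5, 0.0]                    # sparse in the first two atoms
coef, r = matching_pursuit(b)
```

On this noiseless, incoherent example the pursuit selects exactly the atoms of the sparsest representation and drives the residual to zero, the qualitative behavior the paper's stability results extend to noisy data.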

Journal ArticleDOI
Elise A. Feingold, Peter J. Good, Mark S. Guyer, S. Kamholz, and 193 more (19 institutions)
22 Oct 2004-Science
TL;DR: The ENCyclopedia Of DNA Elements (ENCODE) Project is organized as an international consortium of computational and laboratory-based scientists working to develop and apply high-throughput approaches for detecting all sequence elements that confer biological function.
Abstract: The ENCyclopedia Of DNA Elements (ENCODE) Project aims to identify all functional elements in the human genome sequence. The pilot phase of the Project is focused on a specified 30 megabases (∼1%) of the human genome sequence and is organized as an international consortium of computational and laboratory-based scientists working to develop and apply high-throughput approaches for detecting all sequence elements that confer biological function. The results of this pilot phase will guide future efforts to analyze the entire human genome.

MonographDOI
16 Dec 2004
TL;DR: The second edition of The Biomarker Guide provides a comprehensive account of the role that biomarker technology plays both in petroleum exploration and in understanding Earth history and processes.
Abstract: The second edition of The Biomarker Guide is a fully updated and expanded version of this essential reference. Now in two volumes, it provides a comprehensive account of the role that biomarker technology plays both in petroleum exploration and in understanding Earth history and processes. Biomarkers and Isotopes in the Environment and Human History details the origins of biomarkers and introduces basic chemical principles relevant to their study. It discusses analytical techniques, and applications of biomarkers to environmental and archaeological problems. The Biomarker Guide is an invaluable resource for geologists, petroleum geochemists, biogeochemists, environmental scientists and archaeologists.

Journal ArticleDOI
08 Nov 2004
TL;DR: An overview of MIMO wireless technology covering channel models, performance limits, coding, and transceiver design, arguing that multiple antennas at transmitter and receiver offer a cost-effective path to 1 Gb/s links where a single-transmit single-receive antenna system would require an impractical bandwidth-spectral-efficiency product.
Abstract: High data rate wireless communications, nearing 1 Gb/s transmission rates, is of interest in emerging wireless local area networks and home audio/visual networks. Designing very high speed wireless links that offer good quality-of-service and range capability in non-line-of-sight (NLOS) environments constitutes a significant research and engineering challenge. Ignoring fading in NLOS environments, we can, in principle, meet the 1 Gb/s data rate requirement with a single-transmit single-receive antenna wireless system if the product of bandwidth (measured in hertz) and spectral efficiency (measured in bits per second per hertz) is equal to 10^9. A variety of cost, technology and regulatory constraints make such a brute force solution unattractive, if not impossible. The use of multiple antennas at transmitter and receiver, popularly known as multiple-input multiple-output (MIMO) wireless, is an emerging cost-effective technology that offers substantial leverages in making 1 Gb/s wireless links a reality. The paper provides an overview of MIMO wireless technology covering channel models, performance limits, coding, and transceiver design.
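The bandwidth-spectral-efficiency arithmetic in the abstract can be made concrete. The 10 b/s/Hz figure and the 4x4 antenna array below are illustrative assumptions, not values from the paper; they show why spatial multiplexing relaxes the brute-force requirement.

```python
# Target aggregate rate: 1 Gb/s. The abstract's constraint is that
# bandwidth (Hz) x spectral efficiency (b/s/Hz) must equal 10^9.
target_rate = 1e9

# Single-antenna link: even at an optimistic 10 b/s/Hz,
# we would need 100 MHz of contiguous spectrum.
single_antenna_bw = target_rate / 10          # 1e8 Hz = 100 MHz

# MIMO spatial multiplexing: with min(Nt, Nr) parallel streams, the
# per-stream burden drops by roughly that factor (idealized; ignores
# fading, coding loss, and channel estimation overhead).
n_streams = 4                                 # e.g., a 4x4 antenna array
mimo_bw = target_rate / (10 * n_streams)      # 2.5e7 Hz = 25 MHz

print(single_antenna_bw, mimo_bw)
```

The same aggregate rate is reached in a quarter of the bandwidth, which is the "substantial leverage" the abstract attributes to MIMO.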

Journal Article
TL;DR: Building an ambidextrous organization allows executives to pioneer radical or disruptive innovations while also pursuing incremental gains, and the structure itself, combining organizational separation with senior team integration, is not difficult to understand.
Abstract: Corporate executives must constantly look backward, attending to the products and processes of the past, while also gazing forward, preparing for the innovations that will define the future. This mental balancing act is one of the toughest of all managerial challenges--it requires executives to explore new opportunities even as they work diligently to exploit existing capabilities--and it's no surprise that few companies do it well. But as every businessperson knows, there are companies that do. What's their secret? These organizations separate their new, exploratory units from their traditional, exploitative ones, allowing them to have different processes, structures, and cultures; at the same time, they maintain tight links across units at the senior executive level. Such "ambidextrous organizations," as the authors call them, allow executives to pioneer radical or disruptive innovations while also pursuing incremental gains. Of utmost importance to the ambidextrous organization are ambidextrous managers--executives who have the ability to understand and be sensitive to the needs of very different kinds of businesses. They possess the attributes of rigorous cost cutters and free-thinking entrepreneurs while also maintaining the objectivity required to make difficult trade-offs. Almost every company needs to renew itself through the creation of breakthrough products and processes, but it shouldn't do so at the expense of its traditional business. Building an ambidextrous organization is by no means easy, but the structure itself, combining organizational separation with senior team integration, is not difficult to understand. Given the executive will to make it happen, any company can become ambidextrous.

Journal ArticleDOI
TL;DR: In this paper, the authors showed that the photogenerated excitons are usually not split by the built-in electric field, which arises from differences in the electrode work functions.
Abstract: Conjugated polymers are attractive semiconductors for photovoltaic cells because they are strong absorbers and can be deposited on flexible substrates at low cost. Cells made with a single polymer and two electrodes tend to be inefficient because the photogenerated excitons are usually not split by the built-in electric field, which arises from differences in the electrode work functions. The efficiency can be increased by splitting the excitons at an interface between two semiconductors with offset energy levels. Power conversion efficiencies of almost 4% have been achieved by blending polymers with electron-accepting materials such as C60 derivatives, cadmium selenide, and titanium dioxide. We predict that efficiencies higher than 10% can be achieved by optimizing the cell's architecture to promote efficient exciton splitting and charge transport and by reducing the band gap of the polymer so that a larger fraction of the solar spectrum can be absorbed.

Journal ArticleDOI
TL;DR: Disjoint planning horizons are shown to be possible, eliminating the necessity of having data for the full N periods when seeking a minimum total cost inventory management scheme that satisfies known demand in every period.
Abstract: (This article originally appeared in Management Science, October 1958, Volume 5, Number 1, pp. 89-96, published by The Institute of Management Sciences.) A forward algorithm for a solution to the following dynamic version of the economic lot size model is given: allowing the possibility of demands for a single item, inventory holding charges, and setup costs to vary over N periods, we desire a minimum total cost inventory management scheme which satisfies known demand in every period. Disjoint planning horizons are shown to be possible which eliminate the necessity of having data for the full N periods.
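The forward algorithm described in the abstract (the classic Wagner-Whitin recursion) can be sketched as an O(N^2) dynamic program: for each horizon t, try every candidate period j for the last setup, charging that setup plus the holding cost of carrying demand forward from j. The parameter layout below is an assumption for illustration, not the authors' original formulation.

```python
def wagner_whitin(demand, setup, hold):
    """Forward DP for the dynamic economic lot size model.
    demand[k]: demand in period k; setup[k]: setup cost if an order is
    placed in period k; hold[k]: cost per unit carried from period k to k+1.
    Returns the minimum total cost of satisfying all demand."""
    N = len(demand)
    best = [0.0] * (N + 1)          # best[t] = min cost to cover periods 0..t-1
    for t in range(1, N + 1):
        best[t] = min(
            best[j] + setup[j]
            # units ordered in period j and carried past period k incur holding cost
            + sum(hold[k] * sum(demand[k + 1:t]) for k in range(j, t - 1))
            for j in range(t)       # j = period of the last setup
        )
    return best[N]

# Three periods: one setup up front (100 setup + 80 holding) beats
# three separate setups (300 total).
print(wagner_whitin([10, 20, 30], [100, 100, 100], [1, 1, 1]))  # -> 180.0
```

The planning-horizon result in the abstract says that, under certain conditions, an optimal last-setup period for horizon t remains optimal for longer horizons, so the recursion can be truncated without data for all N periods.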

Journal ArticleDOI
TL;DR: It is argued that the spillovers that result from proprietary alliances are a function of the institutional commitments and practices of members of the network, and the relative accessibility of knowledge transferred through contractual linkages determines whether innovation benefits accrue broadly to membership in a coherent network component or narrowly to centrality.
Abstract: We contend that two important, nonrelational, features of formal interorganizational networks-geographic propinquity and organizational form-fundamentally alter the flow of information through a network. Within regional economies, contractual linkages among physically proximate organizations represent relatively transparent channels for information transfer because they are embedded in an ecology rich in informal and labor market transmission mechanisms. Similarly, we argue that the spillovers that result from proprietary alliances are a function of the institutional commitments and practices of members of the network. When the dominant nodes in an innovation network are committed to open regimes of information disclosure, the entire structure is characterized by less tightly monitored ties. The relative accessibility of knowledge transferred through contractual linkages to organizations determines whether innovation benefits accrue broadly to membership in a coherent network component or narrowly to centrality. We draw on novel network visualization methods and conditional fixed effects negative binomial regressions to test these arguments for human therapeutic biotechnology firms located in the Boston metropolitan area.