Author

Ron Weiss

Bio: Ron Weiss is an academic researcher at the Massachusetts Institute of Technology. He has contributed to research on the topics of synthetic biology and speech synthesis. He has an h-index of 82 and has co-authored 292 publications receiving 89,189 citations. His previous affiliations include the French Institute for Research in Computer Science and Automation (Inria) and Google.


Papers
Journal ArticleDOI
15 Jul 2021
TL;DR: In this article, a combination of signal-to-noise ratio (SNR), area under a receiver operating characteristic curve (AUC), and fold change (FC) was used to quantitatively define digitizer performance and predict responses to different input signals.
Abstract: Many synthetic gene circuits are restricted to single-use applications or require iterative refinement for incorporation into complex systems. One example is the recombinase-based digitizer circuit, which has been used to improve weak or leaky biological signals. Here we present a workflow to quantitatively define digitizer performance and predict responses to different input signals. Using a combination of signal-to-noise ratio (SNR), area under a receiver operating characteristic curve (AUC), and fold change (FC), we evaluate three small-molecule inducible digitizer designs demonstrating FC up to 508x and SNR up to 3.77 dB. To study their behavior further and improve modularity, we develop a mixed phenotypic/mechanistic model capable of predicting digitizer configurations that amplify a synNotch cell-to-cell communication signal (Δ SNR up to 2.8 dB). We hope the metrics and modeling approaches here will facilitate incorporation of these digitizers into other systems while providing an improved workflow for gene circuit characterization.
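The three metrics above can be computed directly from measured ON and OFF output samples. The sketch below is illustrative: the exact SNR definition (squared mean separation over pooled variance, in dB) and the sample values are assumptions, not taken from the paper.

```python
import math

def fold_change(on, off):
    """Ratio of mean ON output to mean OFF output."""
    return (sum(on) / len(on)) / (sum(off) / len(off))

def snr_db(on, off):
    """One common SNR definition: squared mean separation over pooled variance, in dB."""
    mu_on, mu_off = sum(on) / len(on), sum(off) / len(off)
    var = lambda xs, mu: sum((x - mu) ** 2 for x in xs) / len(xs)
    noise = (var(on, mu_on) + var(off, mu_off)) / 2
    return 10 * math.log10((mu_on - mu_off) ** 2 / noise)

def auc(on, off):
    """AUC via the Mann-Whitney U statistic: P(random ON sample > random OFF sample)."""
    wins = sum((o > f) + 0.5 * (o == f) for o in on for f in off)
    return wins / (len(on) * len(off))

on = [90, 100, 110, 95]   # hypothetical induced outputs
off = [8, 10, 12, 10]     # hypothetical uninduced outputs
print(fold_change(on, off))    # 9.875
print(round(auc(on, off), 2))  # 1.0 (perfectly separable ON/OFF)
```

Together the three numbers capture complementary aspects of digitizer quality: FC measures dynamic range, SNR penalizes noisy distributions, and AUC measures separability regardless of scale.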

6 citations

Proceedings ArticleDOI
22 May 2011
TL;DR: Under this measure, the best-performing imputation algorithm reconstructs masked sections by choosing the nearest neighbor to the surrounding observations within the song, which is consistent with the large amount of repetition found in pop music.
Abstract: Building models of the structure in musical signals raises the question of how to evaluate and compare different modeling approaches. One possibility is to use the model to impute deliberately-removed patches of missing data, then to compare the model's predictions with the part that was removed. We analyze a corpus of popular music audio represented as beat-synchronous chroma features, and compare imputation based on simple linear prediction to more complex models including nearest neighbor selection and shift-invariant probabilistic latent component analysis. Simple linear models perform best according to Euclidean distance, despite producing stationary results which are not musically meaningful. We therefore investigate alternate evaluation measures and observe that an entropy difference metric correlates better with our expectations for musically consistent reconstructions. Under this measure, the best-performing imputation algorithm reconstructs masked sections by choosing the nearest neighbor to the surrounding observations within the song. This result is consistent with the large amount of repetition found in pop music.
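The winning nearest-neighbor strategy can be sketched in a few lines: copy the patch whose surrounding frames best match the frames around the gap. This toy version (not the paper's code) uses plain Euclidean distance and, for simplicity, lets candidate contexts read values that would be hidden in a real masked setting.

```python
# Toy sketch (illustrative, not the paper's code): impute a masked patch in a
# beat-synchronous feature sequence by copying the patch whose surrounding
# context is nearest (Euclidean distance) to the context around the gap.
def nn_impute(seq, start, width, ctx=2):
    """seq: list of feature vectors; frames [start, start+width) are masked."""
    def dist(a, b):
        return sum((x - y) ** 2 for u, v in zip(a, b) for x, y in zip(u, v))
    target = seq[start - ctx:start] + seq[start + width:start + width + ctx]
    best, best_d = None, float("inf")
    for s in range(ctx, len(seq) - width - ctx + 1):
        if abs(s - start) < width:  # candidate patch overlaps the mask itself
            continue
        cand = seq[s - ctx:s] + seq[s + width:s + width + ctx]
        d = dist(target, cand)
        if d < best_d:
            best, best_d = seq[s:s + width], d
    return best

# A strongly repetitive "song": the same 4-frame pattern three times over.
pattern = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5], [0.2, 0.8]]
song = pattern * 3
print(nn_impute(song, 4, 2))  # recovers [[1.0, 0.0], [0.0, 1.0]]
```

Because the toy song repeats exactly, the nearest context is a later occurrence of the same phrase, and the masked patch is recovered verbatim, mirroring the paper's observation that repetition in pop music is what makes this strategy effective.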

6 citations

Proceedings Article
01 Aug 2018
TL;DR: In this paper, the authors show that under mild assumptions, Signal Temporal Logic (STL) formulae admit a metric space and propose two metrics over this space based on the Pompeiu-Hausdorff distance and the symmetric difference measure.
Abstract: Signal Temporal Logic (STL) is a formal language for describing a broad range of real-valued, temporal properties in cyber-physical systems. While there has been extensive research on verification and control synthesis from STL requirements, there is no formal framework for comparing two STL formulae. In this paper, we show that under mild assumptions, STL formulae admit a metric space. We propose two metrics over this space based on i) the Pompeiu-Hausdorff distance and ii) the symmetric difference measure, and present algorithms to compute them. Alongside illustrative examples, we present applications of these metrics for two fundamental problems: a) design quality measures: to compare all the temporal behaviors of a designed system, such as a synthetic genetic circuit, with the "desired" specification, and b) loss functions: to quantify errors in Temporal Logic Inference (TLI) as a first step to establish formal performance guarantees of TLI algorithms.
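The two proposed metrics are classical set distances. The sketch below shows them over finite point sets; the paper applies them to the (infinite) sets of signals satisfying STL formulae, so the sets and ground metric here are illustrative assumptions only.

```python
# Sketch of the two set distances the paper builds on, over finite point sets.
def hausdorff(A, B, d=lambda x, y: abs(x - y)):
    """Pompeiu-Hausdorff distance: the largest nearest-neighbor gap either way."""
    sup_inf = lambda X, Y: max(min(d(x, y) for y in Y) for x in X)
    return max(sup_inf(A, B), sup_inf(B, A))

def symmetric_difference(A, B):
    """Symmetric-difference measure, using the counting measure for finite sets."""
    return len(A ^ B)

A, B = {0, 1, 2}, {1, 2, 5}
print(hausdorff(A, B))             # 3: the point 5 sits 3 away from its nearest point in A
print(symmetric_difference(A, B))  # 2: A and B disagree on {0, 5}
```

The Hausdorff distance is sensitive to the single worst-matched behavior, while the symmetric-difference measure weighs the total amount of disagreement, which is why the two suit different applications (worst-case design quality vs. loss functions).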

6 citations

Posted ContentDOI
06 Dec 2019-bioRxiv
TL;DR: The effects of resource competition on engineered genetic systems in mammalian cells are quantified, and a feedforward controller is developed to make gene expression robust to changes in resource availability.
Abstract: A significant goal of synthetic biology is to develop genetic devices for accurate and robust control of gene expression. Lack of modularity, wherein a device output does not depend uniquely on its intended inputs but also on its context, leads to poorly predictable device behavior. One contributor to lack of modularity is competition for shared, limited gene expression resources, which can induce "coupling" between otherwise independently regulated genes. Here we quantify the effects of resource competition on engineered genetic systems in mammalian cells and develop a feedforward controller to make gene expression robust to changes in resource availability. In addition to mitigating resource competition, our feedforward controller also enables adaptation to multiple log-orders of DNA copy number variation and is predictably tunable with upstream open reading frames. Our resource competition characterization, along with the feedforward control device, will be critical for achieving robust and accurate control of gene expression.
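A minimal steady-state sketch shows why feedforward control can cancel both copy-number and resource dependence: if a repressor co-expressed from the same DNA copy suppresses the output in proportion to its own level, the disturbance divides out. The functional form and parameter names below are illustrative assumptions, not the paper's model.

```python
# Hedged steady-state sketch of an incoherent feedforward loop (iFFL).
# c = DNA copy number, R = resource availability; a, b are lumped rate
# constants (illustrative, not from the paper).
def open_loop(c, R, a=1.0):
    """Uncontrolled output scales directly with copy number and resources."""
    return a * c * R

def iffl_output(c, R, a=1.0, b=0.5):
    """With a co-expressed repressor, output ~ a*c*R / (1 + b*c*R) -> a/b
    once repression dominates, cancelling both c and R."""
    return a * c * R / (1 + b * c * R)

for c in (1, 10, 100):
    print(c, round(open_loop(c, 1.0), 2), round(iffl_output(c, 1.0), 3))
```

As copy number grows 100-fold, the open-loop output grows 100-fold while the feedforward output saturates near a/b = 2, illustrating the adaptation to copy-number variation described in the abstract.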

6 citations

Journal ArticleDOI
20 Jan 2021
TL;DR: By varying the number of highly adhesive and less adhesive cells in multicellular aggregates, this work finds that the cell-type ratio and total cell count control pattern formation, with the resulting structures maintained for several days.
Abstract: Summary Adhesion-mediated cell sorting has long been considered an organizing principle in developmental biology. While most computational models have emphasized the dynamics of segregation to fully sorted structures, cell sorting can also generate a plethora of transient, incompletely sorted states. The timescale of such states in experimental systems is unclear: if they are long-lived, they can be harnessed by development or engineered in synthetic tissues. Here, we use experiments and computational modeling to demonstrate how such structures can be systematically designed by quantitative control of cell composition. By varying the number of highly adhesive and less adhesive cells in multicellular aggregates, we find the cell-type ratio and total cell count control pattern formation, with resulting structures maintained for several days. Our work takes a step toward mapping the design space of self-assembling structures in development and provides guidance to the emerging field of shape engineering with synthetic biology.

6 citations


Cited by
28 Jul 2005
TL;DR: PfEMP1 interacts with one or more receptors on infected erythrocytes, dendritic cells, and the placenta, playing a key role in adhesion and immune evasion.

Abstract: Antigenic variation allows many pathogenic microorganisms to evade host immune responses. Plasmodium falciparum erythrocyte membrane protein 1 (PfEMP1), expressed on the surface of infected erythrocytes, interacts with one or more receptors on infected erythrocytes, endothelial cells, dendritic cells, and the placenta, and plays a key role in adhesion and immune evasion. Each haploid genome encodes roughly 60 members of the var gene family, and switching transcription among different var gene variants provides the molecular basis for antigenic variation.

18,940 citations

Proceedings ArticleDOI
13 Aug 2016
TL;DR: XGBoost as discussed by the authors proposes a sparsity-aware algorithm for sparse data and weighted quantile sketch for approximate tree learning to achieve state-of-the-art results on many machine learning challenges.
Abstract: Tree boosting is a highly effective and widely used machine learning method. In this paper, we describe a scalable end-to-end tree boosting system called XGBoost, which is used widely by data scientists to achieve state-of-the-art results on many machine learning challenges. We propose a novel sparsity-aware algorithm for sparse data and weighted quantile sketch for approximate tree learning. More importantly, we provide insights on cache access patterns, data compression and sharding to build a scalable tree boosting system. By combining these insights, XGBoost scales beyond billions of examples using far fewer resources than existing systems.
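The core tree-boosting loop the abstract refers to can be sketched in a few lines. This is plain gradient boosting on depth-1 trees (stumps) with squared loss; it is not XGBoost's regularized objective, sparsity-aware split finding, or weighted quantile sketch.

```python
# Minimal gradient-boosted regression with decision stumps on squared loss.
def fit_stump(x, residual):
    """Best single-threshold split minimizing squared error on the residuals."""
    best = None
    for t in sorted(set(x)):
        left = [r for xi, r in zip(x, residual) if xi <= t]
        right = [r for xi, r in zip(x, residual) if xi > t]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = sum((r - lm) ** 2 for r in left) + sum((r - rm) ** 2 for r in right)
        if best is None or err < best[0]:
            best = (err, t, lm, rm)
    _, t, lm, rm = best
    return lambda xi: lm if xi <= t else rm

def boost(x, y, rounds=50, lr=0.3):
    """Each round fits a stump to the current residuals and adds it, shrunk by lr."""
    pred, stumps = [0.0] * len(x), []
    for _ in range(rounds):
        residual = [yi - pi for yi, pi in zip(y, pred)]
        s = fit_stump(x, residual)
        stumps.append(s)
        pred = [pi + lr * s(xi) for pi, xi in zip(pred, x)]
    return lambda xi: sum(lr * s(xi) for s in stumps)

model = boost([0, 1, 2, 3, 4, 5], [0, 0, 0, 1, 1, 1], rounds=50)
print(round(model(1), 2), round(model(4), 2))  # 0.0 1.0
```

XGBoost's contributions sit on top of this skeleton: a second-order, regularized objective for scoring splits, approximate split finding via the weighted quantile sketch, default directions for missing values in sparse data, and the systems-level cache, compression, and sharding work the abstract describes.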

14,872 citations

Journal ArticleDOI
01 Apr 1998
TL;DR: This paper provides an in-depth description of Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext and looks at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want.
Abstract: In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at http://google.stanford.edu/. To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advances in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date. Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. We also look at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want.

14,696 citations

Proceedings Article
11 Nov 1999
TL;DR: This paper describes PageRank, a method for rating Web pages objectively and mechanically, effectively measuring the human interest and attention devoted to them, and shows how to efficiently compute PageRank for large numbers of pages.
Abstract: The importance of a Web page is an inherently subjective matter, which depends on the reader's interests, knowledge and attitudes. But there is still much that can be said objectively about the relative importance of Web pages. This paper describes PageRank, a method for rating Web pages objectively and mechanically, effectively measuring the human interest and attention devoted to them. We compare PageRank to an idealized random Web surfer. We show how to efficiently compute PageRank for large numbers of pages. And we show how to apply PageRank to search and to user navigation.
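The random-surfer model leads directly to the standard power-iteration computation of PageRank. The sketch below uses the usual damping factor d = 0.85 on a toy three-page graph (the graph and iteration count are illustrative, not from the paper).

```python
# Power-iteration sketch of PageRank: a surfer follows a random outgoing link
# with probability d, or jumps to a uniformly random page with probability 1-d.
def pagerank(links, d=0.85, iters=100):
    """links: dict mapping each node to its list of outgoing neighbors."""
    nodes = list(links)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        new = {u: (1 - d) / n for u in nodes}
        for u in nodes:
            out = links[u]
            if not out:  # dangling node: spread its rank uniformly
                for v in nodes:
                    new[v] += d * rank[u] / n
            else:
                for v in out:
                    new[v] += d * rank[u] / len(out)
        rank = new
    return rank

r = pagerank({"a": ["b"], "b": ["c"], "c": ["a", "b"]})
print({k: round(v, 3) for k, v in r.items()})
```

The ranks form a probability distribution (they sum to 1), and page "b" scores highest here because it receives links from both other pages, illustrating how the iteration rewards pages that well-linked pages point to.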

14,400 citations
