Author

Omer Tamuz

Bio: Omer Tamuz is an academic researcher from the California Institute of Technology. The author has contributed to research in topics: Probability measure & Planet. The author has an h-index of 31 and has co-authored 152 publications receiving 3,555 citations. Previous affiliations of Omer Tamuz include the University of Geneva and Microsoft.


Papers
Journal ArticleDOI
TL;DR: In this article, a lower-rank approximation of matrices is used to remove systematic effects, such as atmospheric extinction, detector efficiency, or PSF changes over the detector, from a large set of light curves obtained by a photometric survey.
Abstract: We suggest a new algorithm to remove systematic effects in a large set of light curves obtained by a photometric survey. The algorithm can remove systematic effects, like the ones associated with atmospheric extinction, detector efficiency, or PSF changes over the detector. It works without any prior knowledge of the effects, as long as they appear linearly in many stars of the sample. The approach, which was originally developed to remove atmospheric extinction effects, is based on a lower-rank approximation of matrices, an approach that has already been suggested and used in chemometrics, for example. The proposed algorithm is especially useful in cases where the uncertainties of the measurements are unequal. For equal uncertainties the algorithm reduces to the Principal Components Analysis (PCA) algorithm. We present a simulation to demonstrate the effectiveness of the proposed algorithm and point out its potential, particularly in the search for transit candidates.
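As a rough illustration of the weighted rank-one removal described above, here is a minimal numpy sketch; the function name, the fixed iteration count, and the inner-loop details are assumptions for illustration, not the authors' reference implementation.

import numpy as np

def remove_one_effect(resid, sigma, n_iter=20):
    # resid: (n_stars, n_epochs) residual light curves; sigma: matching uncertainties.
    # Fit a single rank-one term c_i * a_j by weighted least squares, alternating
    # between the per-star coefficients c and the per-epoch effect a, then subtract it.
    # With equal uncertainties this reduces to removing a leading principal component.
    w = 1.0 / sigma**2
    a = np.ones(resid.shape[1])
    for _ in range(n_iter):
        c = (w * resid * a).sum(axis=1) / (w * a**2).sum(axis=1)
        a = (w * resid * c[:, None]).sum(axis=0) / (w * c[:, None]**2).sum(axis=0)
    return resid - np.outer(c, a)

Calling the function repeatedly on its own output removes further systematic effects, one rank-one term at a time.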

511 citations

Posted Content
TL;DR: An algorithm that, given n objects, learns a similarity matrix over all n^2 pairs from crowdsourced data alone is introduced; SVMs reveal that the crowd kernel captures prominent and subtle features across a number of domains.
Abstract: We introduce an algorithm that, given n objects, learns a similarity matrix over all n^2 pairs, from crowdsourced data alone. The algorithm samples responses to adaptively chosen triplet-based relative-similarity queries. Each query has the form "is object 'a' more similar to 'b' or to 'c'?" and is chosen to be maximally informative given the preceding responses. The output is an embedding of the objects into Euclidean space (like MDS); we refer to this as the "crowd kernel." SVMs reveal that the crowd kernel captures prominent and subtle features across a number of domains, such as "is striped" among neckties and "vowel vs. consonant" among letters.
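To make the triplet-based embedding concrete, here is a hedged numpy sketch that minimizes a simple logistic loss over squared distances; the paper's exact probabilistic model and its adaptive (maximally informative) query selection are not reproduced here.

import numpy as np

def triplet_embed(n, triplets, dim=2, lr=0.05, epochs=200, seed=0):
    # triplets: iterable of (a, b, c) meaning "object a is more similar to b than to c".
    # Gradient descent on log(1 + exp(||x_a - x_b||^2 - ||x_a - x_c||^2)) for each triplet.
    rng = np.random.default_rng(seed)
    X = rng.normal(scale=0.1, size=(n, dim))
    for _ in range(epochs):
        for a, b, c in triplets:
            dab, dac = X[a] - X[b], X[a] - X[c]
            g = 1.0 / (1.0 + np.exp(-(dab @ dab - dac @ dac)))  # sigmoid of the violation
            X[a] -= lr * g * 2 * (dab - dac)
            X[b] += lr * g * 2 * dab
            X[c] -= lr * g * 2 * dac
    return X

The Gram matrix X @ X.T of the returned embedding then plays the role of the learned similarity ("crowd kernel") over the n objects.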

260 citations

Journal ArticleDOI
TL;DR: The authors construct a family of examples in which interaction prevents efficient aggregation of information, give a condition on the social network which ensures that aggregation occurs, and show that, under majority dynamics on an expander graph, if the initial population is sufficiently biased towards a particular alternative then that alternative eventually becomes the unanimous preference of the entire population.
Abstract: Consider $n$ individuals who, by popular vote, choose among $q \ge 2$ alternatives, one of which is "better" than the others. Assume that each individual votes independently at random, and that the probability of voting for the better alternative is larger than the probability of voting for any other. It follows from the law of large numbers that a plurality vote among the $n$ individuals would result in the correct outcome, with probability approaching one exponentially quickly as $n \rightarrow \infty$. Our interest in this article is in a variant of the process above where, after forming their initial opinions, the voters update their decisions based on some interaction with their neighbors in a social network. Our main example is "majority dynamics", in which each voter adopts the most popular opinion among its friends. The interaction repeats for some number of rounds and is then followed by a population-wide plurality vote. The question we tackle is that of "efficient aggregation of information": in which cases is the better alternative chosen with probability approaching one as $n \rightarrow \infty$? Conversely, for which sequences of growing graphs does aggregation fail, so that the wrong alternative gets chosen with probability bounded away from zero? We construct a family of examples in which interaction prevents efficient aggregation of information, and give a condition on the social network which ensures that aggregation occurs. For the case of majority dynamics we also investigate the question of unanimity in the limit. In particular, if the voters' social network is an expander graph, we show that if the initial population is sufficiently biased towards a particular alternative then that alternative will eventually become the unanimous preference of the entire population.
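A toy simulation of the two-alternative version of this process is straightforward; the sketch below, which takes an adjacency matrix as input, only illustrates the dynamics described above and is not code from the paper.

import numpy as np

def majority_dynamics(adj, p_correct=0.6, rounds=5, seed=0):
    # adj: symmetric 0/1 adjacency matrix of the social network.
    # Each node initially votes for the better alternative (1) independently with
    # probability p_correct, then repeatedly adopts the majority opinion of its
    # neighbours (keeping its own opinion on ties); the final plurality is returned.
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    votes = (rng.random(n) < p_correct).astype(int)
    for _ in range(rounds):
        ones = adj @ votes                    # number of neighbours currently voting 1
        deg = adj.sum(axis=1)
        votes = np.where(2 * ones > deg, 1, np.where(2 * ones < deg, 0, votes))
    return int(2 * votes.sum() > n)

Running this on different graph families gives a feel for when the interaction helps aggregation and when it hurts.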

145 citations

Proceedings Article
16 Jun 2013
TL;DR: The authors use machine learning to learn weights that relate textual features describing the provided input-output examples to plausible sub-components of a program, improving search and ranking on a variety of text processing tasks found on help forums.
Abstract: Learning programs is a timely and interesting challenge. In Programming by Example (PBE), a system attempts to infer a program from input and output examples alone, by searching for a composition of some set of base functions. We show how machine learning can be used to speed up this seemingly hopeless search problem, by learning weights that relate textual features describing the provided input-output examples to plausible sub-components of a program. This generic learning framework lets us address problems beyond the scope of earlier PBE systems. Experiments on a prototype implementation show that learning improves search and ranking on a variety of text processing tasks found on help forums.
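The following toy sketch illustrates the kind of feature-to-component weighting the abstract describes; the feature names, component names, and weights are hypothetical placeholders, not the paper's actual DSL or learned model.

def rank_components(features, weights, components):
    # Score each candidate base function by summing the learned weights of the
    # textual features observed in the input-output examples; the search over
    # compositions can then explore higher-scoring components first.
    scores = {c: sum(weights.get(c, {}).get(f, 0.0) for f in features) for c in components}
    return sorted(components, key=lambda c: -scores[c])

# Hypothetical usage: examples whose outputs contain digits make a numeric
# extraction component more plausible than a string-case component.
features = {"output_contains_digits", "input_has_delimiter"}
weights = {"extract_number": {"output_contains_digits": 2.1},
           "split_on_delimiter": {"input_has_delimiter": 1.7},
           "uppercase": {}}
print(rank_components(features, weights, ["split_on_delimiter", "uppercase", "extract_number"]))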

141 citations


Cited by
Journal ArticleDOI

08 Dec 2001 - BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one, which at first seems an odd beast: an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Proceedings ArticleDOI
07 Jun 2015
TL;DR: A novel paradigm for evaluating image descriptions that uses human consensus is proposed, along with a new automated metric that captures human judgment of consensus better than existing metrics across sentences generated by various sources.
Abstract: Automatically describing an image with a sentence is a long-standing challenge in computer vision and natural language processing. Due to recent progress in object detection, attribute classification, action recognition, etc., there is renewed interest in this area. However, evaluating the quality of descriptions has proven to be challenging. We propose a novel paradigm for evaluating image descriptions that uses human consensus. This paradigm consists of three main parts: a new triplet-based method of collecting human annotations to measure consensus, a new automated metric that captures consensus, and two new datasets: PASCAL-50S and ABSTRACT-50S that contain 50 sentences describing each image. Our simple metric captures human judgment of consensus better than existing metrics across sentences generated by various sources. We also evaluate five state-of-the-art image description approaches using this new protocol and provide a benchmark for future comparisons. A version of CIDEr named CIDEr-D is available as a part of MS COCO evaluation server to enable systematic evaluation and benchmarking.
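As a rough sketch of the consensus idea, one can score a candidate caption by its average n-gram cosine similarity to the reference sentences; this is a simplified stand-in, not the official CIDEr-D implementation, which adds corpus-level TF-IDF weighting of n-grams, averages over several n-gram lengths, and further robustness terms.

from collections import Counter
import math

def ngram_counts(sentence, n):
    toks = sentence.lower().split()
    return Counter(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))

def consensus_score(candidate, references, n=1):
    # Average cosine similarity between the candidate's n-gram count vector
    # and that of each reference sentence.
    c = ngram_counts(candidate, n)
    total = 0.0
    for ref in references:
        r = ngram_counts(ref, n)
        dot = sum(c[g] * r[g] for g in c)
        norm = math.sqrt(sum(v * v for v in c.values())) * math.sqrt(sum(v * v for v in r.values()))
        total += dot / norm if norm else 0.0
    return total / len(references)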

3,504 citations

Posted Content
TL;DR: The theory of precautionary saving is shown in this paper to be isomorphic to the Arrow-Pratt theory of risk aversion, making possible the application of a large body of knowledge about risk aversion to precautionary saving and, more generally, to the theory of optimal choice under risk.
Abstract: The theory of precautionary saving is shown in this paper to be isomorphic to the Arrow-Pratt theory of risk aversion, making possible the application of a large body of knowledge about risk aversion to precautionary saving, and more generally, to the theory of optimal choice under risk. In particular, a measure of the strength of the precautionary saving motive analogous to the Arrow-Pratt measure of risk aversion is used to establish a number of new propositions about precautionary saving, and to give a new interpretation of the Drèze-Modigliani substitution effect.
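The analogy can be stated compactly in standard notation for a utility function $u$; the formulas below are the textbook definitions, not quoted from the paper:

\[ A(c) = -\frac{u''(c)}{u'(c)} \quad \text{(Arrow-Pratt absolute risk aversion)}, \qquad \eta(c) = -\frac{u'''(c)}{u''(c)} \quad \text{(absolute prudence, the analogous measure governing the strength of the precautionary saving motive)}. \]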

1,944 citations

Book ChapterDOI
12 Oct 2015
TL;DR: This paper proposes the triplet network model, which aims to learn useful representations by distance comparisons, and demonstrates using various datasets that this model learns a better representation than that of its immediate competitor, the Siamese network.
Abstract: Deep learning has proven itself as a successful set of models for learning useful semantic representations of data. These, however, are mostly implicitly learned as part of a classification task. In this paper we propose the triplet network model, which aims to learn useful representations by distance comparisons. A similar model was defined by Wang et al. (2014), tailor made for learning a ranking for image information retrieval. Here we demonstrate using various datasets that our model learns a better representation than that of its immediate competitor, the Siamese network. We also discuss future possible usage as a framework for unsupervised learning.
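For concreteness, a minimal numpy version of the kind of training signal such a network optimizes is shown below; the paper itself feeds all three inputs through a shared deep embedding and compares the two distances with a softmax, so this margin-based form is a simplification rather than the paper's exact loss.

import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    # Encourage the embedded anchor to lie closer to the positive example
    # than to the negative one by at least `margin`.
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)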

1,635 citations

Journal ArticleDOI
TL;DR: In this article, the authors investigate orbital shrinkage by the combined effects of secular perturbations from a distant companion star (Kozai oscillations) and tidal friction, and predict the distribution of orbital elements produced by this process.
Abstract: At least two arguments suggest that the orbits of a large fraction of binary stars and extrasolar planets shrank by 1-2 orders of magnitude after formation: (1) the physical radius of a star shrinks by a large factor from birth to the main sequence, yet many main-sequence stars have companions orbiting only a few stellar radii away, and (2) in current theories of planet formation, the region within ~0.1 AU of a protostar is too hot and rarefied for a Jupiter-mass planet to form, yet many hot Jupiters are observed at such distances. We investigate orbital shrinkage by the combined effects of secular perturbations from a distant companion star (Kozai oscillations) and tidal friction. We integrate the relevant equations of motion to predict the distribution of orbital elements produced by this process. Binary stars with orbital periods of 0.1-10 days, with a median of ~2 days, are produced from binaries with much longer periods (10 to ~10^5 days), consistent with observations indicating that most or all short-period binaries have distant companions (tertiaries). We also make two new testable predictions: (1) For periods between 3 and 10 days, the distribution of the mutual inclination between the inner binary and the tertiary orbit should peak strongly near 40° and 140°. (2) Extrasolar planets whose host stars have a distant binary companion may also undergo this process, in which case the orbit of the resulting hot Jupiter will typically be misaligned with the equator of its host star.
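For orientation, in the simplest limit of the Kozai mechanism (quadrupole order, test-particle inner orbit, initially circular), the maximum eccentricity reached depends only on the initial mutual inclination $i_0$; this standard relation is background, not a result quoted from the paper, which integrates the full equations including tidal friction:

\[ e_{\max} = \sqrt{1 - \tfrac{5}{3}\cos^{2} i_0}, \]

with oscillations occurring only for inclinations between roughly 39° and 141°, consistent with the predicted inclination peaks near 40° and 140° above.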

1,434 citations