Author

Oya Ekin

Other affiliations: Rutgers University
Bio: Oya Ekin is an academic researcher from Bilkent University. The author has contributed to research on topics including Boolean functions and Boolean expressions. The author has an h-index of 7 and has co-authored 10 publications receiving 215 citations. Previous affiliations of Oya Ekin include Rutgers University.

Papers
Journal Article
TL;DR: It is shown that a class of Boolean functions can be described by an appropriate set of first-order sentences if and only if it is closed under permutation of variables.

79 citations

Journal Article
TL;DR: This paper investigates the complexity of problems for a broad class of DNF properties with respect to the fixation of variables or the deletion of terms in a DNF.

37 citations

Journal Article
TL;DR: A one-to-one correspondence is given between submodular functions and partial preorders (reflexive and transitive binary relations), and in particular between the nondegenerate acyclic submodular functions and the partially ordered sets.

24 citations

Journal Article
TL;DR: The relationships between properties of the Boolean hypercube and the DNF representations of the associated Boolean functions are studied.

21 citations

Journal Article
TL;DR: The problem of uniqueness is studied, and a polynomial algorithm is provided for checking whether all k-convex extensions agree at a point outside the given data set, which is doubly exponential for small data sets and PAC-learnable for large ones.

21 citations


Cited by
Proceedings Article
06 Jun 2011
TL;DR: This paper considers PAC-style learning of submodular functions in a distributional setting and uses lossless expanders to construct a new family of matroids which can take wildly varying rank values on superpolynomially many sets; no such construction was previously known.
Abstract: There has been much interest in the machine learning and algorithmic game theory communities in understanding and using submodular functions. Despite this substantial interest, little is known about their learnability from data. Motivated by applications such as pricing goods in economics, this paper considers PAC-style learning of submodular functions in a distributional setting. A problem instance consists of a distribution on {0,1}^n and a real-valued function on {0,1}^n that is non-negative, monotone, and submodular. We are given poly(n) samples from this distribution, along with the values of the function at those sample points. The task is to approximate the value of the function to within a multiplicative factor at subsequent sample points drawn from the same distribution, with sufficiently high probability. We develop the first theoretical analysis of this problem, proving a number of important and nearly tight results. For instance, if the underlying distribution is a product distribution then we give a learning algorithm that achieves a constant-factor approximation (under some assumptions). However, for general distributions we provide a surprising Omega(n^{1/3}) lower bound based on a new and interesting class of matroids, and we also show an O(n^{1/2}) upper bound. Our work combines central issues in optimization (submodular functions and matroids) with central topics in learning (distributional learning and PAC-style analyses) and with central concepts in pseudo-randomness (lossless expander graphs). Our analysis involves a twist on the usual learning theory models and uncovers some interesting structural and extremal properties of submodular functions, which we suspect are likely to be useful in other contexts. In particular, to prove our general lower bound, we use lossless expanders to construct a new family of matroids which can take wildly varying rank values on superpolynomially many sets; no such construction was previously known. This construction shows unexpected extremal properties of submodular functions.
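As a concrete illustration of the properties assumed of the target function above, the following Python sketch brute-force checks non-negativity, monotonicity, and submodularity on a small ground set. The coverage function and its "covers" sets are illustrative toy data, not taken from the paper.

from itertools import combinations

# Toy coverage function (illustrative, not from the paper): each ground-set
# element covers some items, and f(S) counts the items covered by S.
# Coverage functions are non-negative, monotone, and submodular.
covers = {0: {"a", "b"}, 1: {"b", "c"}, 2: {"c", "d"}}
ground = list(covers)

def f(S):
    return len(set().union(*(covers[i] for i in S))) if S else 0

def is_nonneg_monotone_submodular(f, ground):
    # Enumerate all subset pairs S <= T and verify:
    #   non-negative: f(S) >= 0
    #   monotone:     f(S) <= f(T)
    #   submodular:   f(S + {x}) - f(S) >= f(T + {x}) - f(T) for x not in T
    subsets = [set(c) for r in range(len(ground) + 1)
               for c in combinations(ground, r)]
    for S in subsets:
        if f(S) < 0:
            return False
        for T in subsets:
            if not S <= T:
                continue
            if f(S) > f(T):
                return False
            for x in ground:
                if x not in T and f(S | {x}) - f(S) < f(T | {x}) - f(T):
                    return False
    return True

print(is_nonneg_monotone_submodular(f, ground))  # expected: True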

114 citations

Journal Article
TL;DR: Geometric proximity graphs such as Voronoi diagrams and their many relatives provide elegant solutions to instance-based learning and data mining problems, as well as to related problems such as outlier detection.
Abstract: In the typical nonparametric approach to classification in instance-based learning and data mining, random data (the training set of patterns) are collected and used to design a decision rule (classifier). One of the most well known such rules is the k-nearest-neighbor decision rule (also known as lazy learning) in which an unknown pattern is classified into the majority class among its k nearest neighbors in the training set. Several questions related to this rule have received considerable attention over the years. Such questions include the following. How can the storage of the training set be reduced without degrading the performance of the decision rule? How should the reduced training set be selected to represent the different classes? How large should k be? How should the value of k be chosen? Should all k neighbors be equally weighted when used to decide the class of an unknown pattern? If not, how should the weights be chosen? Should all the features (attributes) be weighted equally, and if not, how should the feature weights be chosen? What distance metric should be used? How can the rule be made robust to overlapping classes or noise present in the training data? How can the rule be made invariant to scaling of the measurements? How can the nearest neighbors of a new point be computed efficiently? What is the smallest neural network that can implement nearest neighbor decision rules? Geometric proximity graphs such as Voronoi diagrams and their many relatives provide elegant solutions to these problems, as well as other related data mining problems such as outlier detection. After a non-exhaustive review of some of the classical canonical approaches to these problems, the methods that use proximity graphs are discussed, some new observations are made, and open problems are listed.
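As a minimal illustration of the k-nearest-neighbor rule described above, here is a self-contained Python sketch; the two-class training set, the Euclidean metric, and the unweighted majority vote are illustrative choices, not prescriptions from the survey (which treats metric and weighting as open design questions).

import math
from collections import Counter

def knn_classify(query, training, k=3):
    # Sort training pairs (point, label) by distance to the query and
    # take a majority vote among the k nearest neighbors.
    neighbors = sorted(training, key=lambda pl: math.dist(query, pl[0]))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Hypothetical two-class training set in the plane.
train = [((0.0, 0.0), "A"), ((0.1, 0.2), "A"), ((0.2, 0.1), "A"),
         ((1.0, 1.0), "B"), ((0.9, 1.1), "B"), ((1.1, 0.9), "B")]
print(knn_classify((0.2, 0.3), train))  # expected: A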

96 citations

Journal Article
TL;DR: This paper models various such suitability criteria as partial preorders defined on the set of patterns, introduces three such preferences, and describes patterns that are Pareto-optimal with respect to any one of them, or to certain combinations of them.

92 citations

Posted Content
12 Aug 2010
TL;DR: In this paper, the authors study submodular functions from a learning theoretic angle and uncover several structural results revealing ways in which submodular functions can be both surprisingly structured and surprisingly unstructured.
Abstract: Submodular functions are discrete functions that model laws of diminishing returns and enjoy numerous algorithmic applications. They have been used in many areas, including combinatorial optimization, machine learning, and economics. In this work we study submodular functions from a learning theoretic angle. We provide algorithms for learning submodular functions, as well as lower bounds on their learnability. In doing so, we uncover several novel structural results revealing ways in which submodular functions can be both surprisingly structured and surprisingly unstructured. We provide several concrete implications of our work in other domains including algorithmic game theory and combinatorial optimization. At a technical level, this research combines ideas from many areas, including learning theory (distributional learning and PAC-style analyses), combinatorics and optimization (matroids and submodular functions), and pseudorandomness (lossless expander graphs).
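The diminishing-returns law mentioned at the start of this abstract can be seen directly in a small example. The sketch below (a toy coverage function, not an instance from the paper) tracks the marginal gain of adding one fixed element to a growing chain of sets; submodularity says these gains can never increase.

# Toy coverage function (illustrative, not from the paper).
covers = {0: {"a", "b", "c"}, 1: {"b", "c", "d"}, 2: {"c", "d", "e"}}

def f(S):
    return len(set().union(*(covers[i] for i in S))) if S else 0

chain = [set(), {0}, {0, 1}]  # a chain of nested sets
x = 2                         # fixed element whose marginal gain we track
gains = [f(S | {x}) - f(S) for S in chain]
print(gains)  # [3, 2, 1] -- non-increasing, as submodularity requires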

86 citations