Topic: Open problem

About: Open problem is a research topic. Over its lifetime, 3,799 publications have been published within this topic, receiving 54,215 citations. The topic is also known as: open question & unsolved problem.


Papers
Journal ArticleDOI
TL;DR: In this paper, it was shown that every positive regular solution u(x) is radially symmetric and monotone about some point, and therefore assumes the explicit form reproduced below, with some constant c = c(n, α), for some t > 0 and x₀ ∈ ℝⁿ.
Abstract: Let n be a positive integer and let 0 < α < n. Consider the integral equation (0.1) reproduced below. We prove that every positive regular solution u(x) is radially symmetric and monotone about some point and therefore assumes the closed form given below, with some constant c = c(n, α), for some t > 0 and x₀ ∈ ℝⁿ. This solves an open problem posed by Lieb [12]. The technique we use is the method of moving planes in an integral form, which is quite different from the techniques used for differential equations; from the point of view of general methodology, this is another interesting contribution of the paper. Moreover, we show that the family of well-known semilinear partial differential equations is equivalent to our integral equation (0.1), and we thus classify all the solutions of those PDEs. © 2005 Wiley Periodicals, Inc.
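The displayed formulas were dropped in extraction; the following LaTeX is a hedged reconstruction of equation (0.1) and the classified solution profile as stated in Chen, Li, and Ou (2006), and should be checked against the published paper:

```latex
% Integral equation (0.1) on \mathbb{R}^n with 0 < \alpha < n:
u(x) = \int_{\mathbb{R}^n} \frac{u(y)^{(n+\alpha)/(n-\alpha)}}{|x - y|^{n-\alpha}} \, dy .
% Every positive regular solution is a translate/dilate of a single profile:
u(x) = c(n, \alpha) \left( \frac{t}{t^2 + |x - x_0|^2} \right)^{(n-\alpha)/2},
\qquad t > 0, \; x_0 \in \mathbb{R}^n .
```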

781 citations

Proceedings Article
Shipra Agrawal1, Navin Goyal1
16 Jun 2013
TL;DR: In this article, a generalization of Thompson Sampling is proposed for the stochastic contextual multi-armed bandit problem with linear payoff functions, where the contexts are provided by an adaptive adversary, and a high-probability regret bound of O((d²/ε)·√(T^{1+ε})) is shown.
Abstract: Thompson Sampling is one of the oldest heuristics for multi-armed bandit problems. It is a randomized algorithm based on Bayesian ideas, and has recently generated significant interest after several studies demonstrated that it has better empirical performance than state-of-the-art methods. However, many questions regarding its theoretical performance remained open. In this paper, we design and analyze a generalization of the Thompson Sampling algorithm for the stochastic contextual multi-armed bandit problem with linear payoff functions, where the contexts are provided by an adaptive adversary. This is among the most important and widely studied versions of the contextual bandit problem. We prove a high-probability regret bound of O((d²/ε)·√(T^{1+ε})) in time T for any 0 < ε < 1, where d is the dimension of each context vector and ε is a parameter used by the algorithm. Our results provide the first theoretical guarantees for the contextual version of Thompson Sampling, and are close to the lower bound of Ω(d√T) for this problem. This essentially solves a COLT open problem of Chapelle and Li [COLT 2012].
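As a concrete illustration, here is a minimal sketch of Thompson Sampling with linear payoffs in the spirit of this paper (not its exact algorithm): a Gaussian posterior over the unknown parameter is maintained by Bayesian linear regression, a parameter vector is sampled each round, and the arm maximizing the sampled payoff is played. The prior scale `v`, noise level, and synthetic contexts are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of Thompson Sampling for linear contextual bandits,
# in the spirit of Agrawal & Goyal (2013); the prior scale `v`, the
# noise level, and the synthetic reward model are assumptions.
rng = np.random.default_rng(0)
d, n_arms, T, v = 5, 10, 2000, 0.5
theta_true = rng.normal(size=d)     # unknown payoff parameter
B = np.eye(d)                       # posterior precision matrix
f = np.zeros(d)                     # sum of reward-weighted contexts

for t in range(T):
    contexts = rng.normal(size=(n_arms, d))        # per-arm context vectors
    mu = np.linalg.solve(B, f)                     # posterior mean
    theta = rng.multivariate_normal(mu, v**2 * np.linalg.inv(B))
    arm = int(np.argmax(contexts @ theta))         # act greedily on the sample
    x = contexts[arm]
    r = x @ theta_true + rng.normal(scale=0.1)     # noisy linear reward
    B += np.outer(x, x)                            # ridge-regression update
    f += r * x
```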

668 citations

Journal ArticleDOI
TL;DR: The pseudolikelihood method, applied to 21-state Potts models describing the statistical properties of families of evolutionarily related proteins, significantly outperforms existing approaches to the direct-coupling analysis, the latter being based on standard mean-field techniques.
Abstract: Spatially proximate amino acids in a protein tend to coevolve. A protein's three-dimensional (3D) structure hence leaves an echo of correlations in the evolutionary record. Reverse engineering 3D structures from such correlations is an open problem in structural biology, pursued with increasing vigor as more and more protein sequences continue to fill the data banks. Within this task lies a statistical inference problem, rooted in the following: correlation between two sites in a protein sequence can arise from firsthand interaction but can also be network-propagated via intermediate sites; observed correlation is not enough to guarantee proximity. To separate direct from indirect interactions is an instance of the general problem of inverse statistical mechanics, where the task is to learn model parameters (fields, couplings) from observables (magnetizations, correlations, samples) in large systems. In the context of protein sequences, the approach has been referred to as direct-coupling analysis. Here we show that the pseudolikelihood method, applied to 21-state Potts models describing the statistical properties of families of evolutionarily related proteins, significantly outperforms existing approaches to the direct-coupling analysis, the latter being based on standard mean-field techniques. This improved performance also relies on a modified score for the coupling strength. The results are verified using known crystal structures of specific sequence instances of various protein families. Code implementing the new method can be found at http://plmdca.csc.kth.se/.
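To make the inference step concrete, here is a minimal numpy sketch of the site-wise negative log-pseudolikelihood for a q-state Potts model, the kind of objective plmDCA maximizes; the l2 regularization and the paper's modified coupling score are omitted, and the function name and array shapes are my assumptions.

```python
import numpy as np

# Sketch of the Potts-model pseudolikelihood objective used in
# direct-coupling analysis; regularization and the paper's modified
# coupling score are omitted.
def neg_log_pseudolikelihood(h, J, seqs, q=21):
    """h: (L, q) fields; J: (L, L, q, q) couplings with J[r, r] == 0;
    seqs: (N, L) alignment, integer-encoded states in {0, ..., q-1}."""
    N, L = seqs.shape
    onehot = np.eye(q)[seqs]                     # (N, L, q)
    total = 0.0
    for r in range(L):
        # logits[n, a] = h[r, a] + sum_{j != r} J[r, j, a, seqs[n, j]]
        logits = h[r] + np.einsum('jab,njb->na', J[r], onehot)
        m = logits.max(axis=1, keepdims=True)    # stable log-sum-exp
        log_z = m[:, 0] + np.log(np.exp(logits - m).sum(axis=1))
        total -= (logits[np.arange(N), seqs[:, r]] - log_z).sum()
    return total / N
```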

637 citations

Book ChapterDOI
30 May 2011
TL;DR: This survey describes the most important constructions of secret-sharing schemes, explains the connections between secret-sharing schemes and monotone formulae and monotone span programs, and presents the known lower bounds on the share size.
Abstract: A secret-sharing scheme is a method by which a dealer distributes shares to parties such that only authorized subsets of parties can reconstruct the secret. Secret-sharing schemes are an important tool in cryptography and are used as a building block in many secure protocols, e.g., general protocols for multiparty computation, Byzantine agreement, threshold cryptography, access control, attribute-based encryption, and generalized oblivious transfer. In this survey, we describe the most important constructions of secret-sharing schemes; in particular, we explain the connections between secret-sharing schemes and monotone formulae and monotone span programs. We then discuss the main problem with known secret-sharing schemes: the large share size, which is exponential in the number of parties. We conjecture that this is unavoidable. We present the known lower bounds on the share size. These lower bounds are fairly weak, and there is a big gap between the lower and upper bounds. For linear secret-sharing schemes, a class of schemes based on linear algebra that contains most known schemes, super-polynomial lower bounds on the share size are known. We describe the proofs of these lower bounds. We also present two results connecting secret-sharing schemes for a Hamiltonian access structure to the NP vs. coNP problem and to a major open problem in cryptography: constructing oblivious-transfer protocols from one-way functions.
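As one concrete instance of the schemes such a survey covers, here is a minimal sketch of Shamir's t-out-of-n threshold scheme over a prime field; the prime and the use of Python's `random` module are illustrative, and a real deployment would need a cryptographic RNG.

```python
import random

# Sketch of Shamir's t-out-of-n threshold secret sharing over GF(P).
P = 2**127 - 1  # a Mersenne prime, large enough for a small secret

def share(secret, t, n):
    # Random degree-(t-1) polynomial with the secret as constant term.
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    eval_poly = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, eval_poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789  # any 3 of the 5 shares suffice
```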

618 citations

Posted Content
TL;DR: In this article, the squared loss function of deep linear neural networks with any depth and any widths is shown to be non-convex and non-concave; every local minimum is a global minimum, every critical point that is not a global minimum is a saddle point, and there exist "bad" saddle points (where the Hessian has no negative eigenvalue) for networks with more than three layers.
Abstract: In this paper, we prove a conjecture published in 1989 and also partially address an open problem announced at the Conference on Learning Theory (COLT) 2015. With no unrealistic assumptions, we first prove the following statements for the squared loss function of deep linear neural networks with any depth and any widths: 1) the function is non-convex and non-concave, 2) every local minimum is a global minimum, 3) every critical point that is not a global minimum is a saddle point, and 4) there exist "bad" saddle points (where the Hessian has no negative eigenvalue) for the deeper networks (with more than three layers), whereas there is no bad saddle point for the shallow networks (with three layers). Moreover, for deep nonlinear neural networks, we prove the same four statements via a reduction to a deep linear model under the independence assumption adopted from recent work. As a result, we present an instance for which we can answer the following question: how difficult is it to directly train a deep model in theory? It is more difficult than the classical machine learning models (because of the non-convexity), but not too difficult (because of the nonexistence of poor local minima). Furthermore, the mathematically proven existence of bad saddle points for deeper models suggests a further open problem. We note that even though we have advanced the theoretical foundations of deep learning and non-convex optimization, there is still a gap between theory and practice.
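The contrast between escapable and "bad" saddles can already be seen in scalar toy versions of the loss. The following sketch (my construction, not the paper's proof) compares the Hessian at the origin for a product of two weights versus three: the two-weight saddle has a negative eigenvalue, while the three-weight saddle's Hessian vanishes entirely.

```python
import numpy as np

# Toy illustration: f2(w) = (w1*w2 - 1)^2 has an escapable saddle at 0,
# while f3(w) = (w1*w2*w3 - 1)^2 has a "bad" flat saddle at 0.
def hessian(f, w, eps=1e-4):
    """Central-difference Hessian of f at w."""
    n = len(w)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei, ej = np.eye(n)[i] * eps, np.eye(n)[j] * eps
            H[i, j] = (f(w + ei + ej) - f(w + ei - ej)
                       - f(w - ei + ej) + f(w - ei - ej)) / (4 * eps**2)
    return H

f2 = lambda w: (w[0] * w[1] - 1.0) ** 2
f3 = lambda w: (w[0] * w[1] * w[2] - 1.0) ** 2

print(np.linalg.eigvalsh(hessian(f2, np.zeros(2))))  # one eigenvalue ~ -2: escapable saddle
print(np.linalg.eigvalsh(hessian(f3, np.zeros(3))))  # all ~ 0: no negative eigenvalue, a "bad" saddle
```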

609 citations


Network Information
Related Topics (5)
Upper and lower bounds: 56.9K papers, 1.1M citations (92% related)
Time complexity: 36K papers, 879.5K citations (92% related)
Bounded function: 77.2K papers, 1.3M citations (92% related)
Polynomial: 52.6K papers, 853.1K citations (91% related)
Markov chain: 51.9K papers, 1.3M citations (85% related)
Performance Metrics
No. of papers in the topic in previous years:

Year  Papers
2022    4
2021  230
2020  253
2019  247
2018  220
2017  188