
Showing papers on "Average-case complexity published in 2017"


Proceedings ArticleDOI
19 Jun 2017
TL;DR: Assuming widely conjectured worst-case hardness for problems from the study of fine-grained complexity, the authors construct functions that can be computed in some fixed polynomial time but are hard on average for any algorithm that runs in slightly smaller time; unlike earlier unconditional constructions (Goldmann et al., IPL '94), these functions are closely related to well-studied problems and have considerable algebraic structure.
Abstract: We present functions that can be computed in some fixed polynomial time but are hard on average for any algorithm that runs in slightly smaller time, assuming widely-conjectured worst-case hardness for problems from the study of fine-grained complexity. Unconditional constructions of such functions are known from before (Goldmann et al., IPL '94), but these have been canonical functions that have not found further use, while our functions are closely related to well-studied problems and have considerable algebraic structure. Based on the average-case hardness and structural properties of our functions, we outline the construction of a Proof of Work scheme and discuss possible approaches to constructing fine-grained One-Way Functions. We also show how our reductions make conjectures regarding the worst-case hardness of the problems we reduce from (and consequently the Strong Exponential Time Hypothesis) heuristically falsifiable in a sense similar to that of (Naor, CRYPTO '03). We prove our hardness results in each case by showing fine-grained reductions from solving one of three problems - namely, Orthogonal Vectors (OV), 3SUM, and All-Pairs Shortest Paths (APSP) - in the worst case to computing our function correctly on a uniformly random input. The conjectured hardness of OV and 3SUM then gives us functions that require n^{2-o(1)} time to compute on average, and that of APSP gives us a function that requires n^{3-o(1)} time. Using the same techniques we also obtain a conditional average-case time hierarchy of functions.
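In informal LaTeX, the conditional lower bounds have the following shape (a paraphrase of the abstract; f_OV, f_3SUM and f_APSP are shorthand for the constructed functions, not the paper's notation, and the exact success probabilities over random inputs are those stated in the paper):

```latex
% Shape of the conditional average-case lower bounds (paraphrase).
\text{OV or 3SUM requires } n^{2-o(1)} \text{ worst-case time}
  \;\Longrightarrow\;
  f_{\mathrm{OV}},\, f_{\mathrm{3SUM}} \text{ require } n^{2-o(1)} \text{ time on a uniformly random input},
\qquad
\text{APSP requires } n^{3-o(1)} \text{ worst-case time}
  \;\Longrightarrow\;
  f_{\mathrm{APSP}} \text{ requires } n^{3-o(1)} \text{ time on average}.
```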

50 citations


DOI
09 Jul 2017
TL;DR: The results suggest that it might be worthwhile to focus on the average-case hardness of MKTP and MCSP when approaching the question of whether these problems are NP-hard.
Abstract: We prove various results on the complexity of MCSP (Minimum Circuit Size Problem) and the related MKTP (Minimum Kolmogorov Time-Bounded Complexity Problem):
• We observe that under standard cryptographic assumptions, MCSP has a pseudorandom self-reduction. This is a new notion we define by relaxing the notion of a random self-reduction to allow queries to be pseudorandom rather than uniformly random. As a consequence we derive a weak form of a worst-case to average-case reduction for (a promise version of) MCSP. Our result also distinguishes MCSP from natural NP-complete problems, which are not known to have worst-case to average-case reductions. Indeed, it is known that strong forms of worst-case to average-case reductions for NP-complete problems collapse the Polynomial Hierarchy.
• We prove the first non-trivial formula size lower bounds for MCSP by showing that MCSP requires nearly quadratic-size De Morgan formulas.
• We show average-case superpolynomial size lower bounds for MKTP against AC^0[p] for any prime p.
• We show the hardness of MKTP on average under assumptions that have been used in much recent work, such as Feige's assumptions, Alekhnovich's assumption and the Planted Clique conjecture. In addition, MCSP is hard under Alekhnovich's assumption. Using a version of Feige's assumption against co-nondeterministic algorithms that has been conjectured recently, we provide evidence for the first time that MKTP is not in coNP.
Our results suggest that it might be worthwhile to focus on the average-case hardness of MKTP and MCSP when approaching the question of whether these problems are NP-hard.
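For context, the standard (non-promise) formulation of MCSP is roughly the following; the paper works with a promise version, and MKTP is the analogous problem for the time-bounded Kolmogorov measure KT. This is a paraphrase of the common definition, not the paper's exact statement:

```latex
% Standard formulation of the Minimum Circuit Size Problem (paraphrase).
\mathrm{MCSP} \;=\; \Big\{ (T, s) \;:\; T \in \{0,1\}^{2^{n}} \text{ is the truth table of some }
  f\colon\{0,1\}^{n}\to\{0,1\} \text{ computable by a Boolean circuit of size at most } s \Big\}.
```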

50 citations


Journal ArticleDOI
TL;DR: The original contribution of the paper includes the introduction of several new entropy-deceiving networks and the empirical comparison of entropy and K-complexity as fundamental quantities for constructing complexity measures for networks.
Abstract: One of the most popular methods of estimating the complexity of networks is to measure the entropy of network invariants, such as adjacency matrices or degree sequences. Unfortunately, entropy and all entropy-based information-theoretic measures have several vulnerabilities. These measures are neither independent of a particular representation of the network nor able to capture the properties of the generative process which produces the network. Instead, we advocate the use of the algorithmic entropy as the basis for complexity definition for networks. Algorithmic entropy (also known as Kolmogorov complexity or K-complexity for short) evaluates the complexity of the description required for a lossless recreation of the network. This measure is not affected by a particular choice of network features and it does not depend on the method of network representation. We perform experiments on Shannon entropy and K-complexity for gradually evolving networks. The results of these experiments point to K-complexity as the more robust and reliable measure of network complexity. The original contribution of the paper includes the introduction of several new entropy-deceiving networks and the empirical comparison of entropy and K-complexity as fundamental quantities for constructing complexity measures for networks.
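A minimal sketch of the two kinds of measure contrasted above, using compressed length as the usual computable stand-in for the (uncomputable) Kolmogorov complexity; the graph generator and all names are illustrative, not taken from the paper:

```python
import math
import random
import zlib

def degree_entropy(adj):
    """Shannon entropy (bits) of the degree distribution of an undirected graph."""
    n = len(adj)
    degrees = [sum(row) for row in adj]
    counts = {}
    for d in degrees:
        counts[d] = counts.get(d, 0) + 1
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def k_complexity_proxy(adj):
    """Compression-based proxy for Kolmogorov (K-) complexity of the adjacency matrix."""
    bits = "".join(str(b) for row in adj for b in row)
    return len(zlib.compress(bits.encode()))

def random_graph(n, p, seed=0):
    """Erdos-Renyi-style random adjacency matrix (illustrative generator)."""
    rng = random.Random(seed)
    adj = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i][j] = adj[j][i] = 1
    return adj

if __name__ == "__main__":
    g = random_graph(100, 0.05)
    print("degree-sequence entropy:", round(degree_entropy(g), 3), "bits")
    print("K-complexity proxy (compressed bytes):", k_complexity_proxy(g))
```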

47 citations


Journal ArticleDOI
TL;DR: In this paper, the authors give a reduction from clique to establish that sparse PCA is NP-hard and exclude a fully polynomial time approximation scheme (FPTAS) unless P = NP.

40 citations


Proceedings ArticleDOI
19 Jun 2017
TL;DR: It is shown that if a function f cannot be computed correctly on more than 1/2 + 2^{-k} of the inputs by any formula of size at most s, then computing f exactly requires formula size at least Ω(k) · s; the paper also proves the first super-linear lower bound on the bipartite formula complexity of an explicit function.
Abstract: A de Morgan formula over Boolean variables x_1,…,x_n is a binary tree whose internal nodes are marked with AND or OR gates and whose leaves are marked with variables or their negation. We define the size of the formula as the number of leaves in it. Proving that some explicit function (in P or NP) requires a large formula is a central open question in computational complexity. While we believe that some explicit functions require exponential formula size, currently the best lower bound for an explicit function is the Ω(n^3) lower bound for Andreev's function. A long line of work in quantum query complexity, culminating in the work of Reichardt [SODA, 2011], proved that for any formula of size s, there exists a polynomial of degree at most O(√s) that approximates the formula up to a small point-wise error. This is a classical theorem, arguing about polynomials and formulae; however, the only known proof of it involves quantum algorithms. We apply Reichardt's result to obtain the following: (1) We show how to trade average-case hardness in exchange for size. More precisely, we show that if a function f cannot be computed correctly on more than 1/2 + 2^{-k} of the inputs by any formula of size at most s, then computing f exactly requires formula size at least Ω(k) · s. As an application, we improve the state-of-the-art formula size lower bounds for explicit functions by a factor of Ω(log n). (2) We prove that the bipartite formula size of the Inner-Product function is Ω(n^2). (A bipartite formula on Boolean variables x_1,…,x_n and y_1,…,y_n is a binary tree whose internal nodes are marked with AND or OR gates and whose leaves can compute any function of either the x or y variables.) We show that any bipartite formula for the Inner-Product modulo 2 function, namely IP(x,y) = Σ_{i=1}^{n} x_i y_i (mod 2), must be of size Ω(n^2), which is tight up to logarithmic factors. To the best of our knowledge, this is the first super-linear lower bound on the bipartite formula complexity of any explicit function.
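A compact restatement of item (1), in informal LaTeX (a paraphrase of the abstract, not the paper's exact theorem statement):

```latex
% Paraphrase of item (1): average-case hardness against size-s formulas
% forces a larger formula for exact computation; L(f) denotes the de Morgan
% formula size of f, and the probability is over a uniformly random input x.
\Big(\forall F \text{ of size} \le s:\;
      \Pr_{x}\big[F(x) = f(x)\big] \le \tfrac{1}{2} + 2^{-k}\Big)
\;\Longrightarrow\;
L(f) \;\ge\; \Omega(k)\cdot s .
```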

27 citations


Journal ArticleDOI
TL;DR: This work studies output-sensitive algorithms and complexity for multiobjective combinatorial optimization (MOCO) problems and provides both practical examples of MOCO problems for which such an efficient algorithm exists and problems for which no efficient algorithm exists under mild complexity-theoretic assumptions.
Abstract: We study output-sensitive algorithms and complexity for multiobjective combinatorial optimization (MOCO) problems. In this computational complexity framework, an algorithm for a general enumeration problem is regarded as efficient if it is output-sensitive, i.e., its running time is bounded by a polynomial in the input and the output size. We provide both practical examples of MOCO problems for which such an efficient algorithm exists and problems for which no efficient algorithm exists under mild complexity-theoretic assumptions.
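As a compact restatement of the efficiency notion used above (a paraphrase, not the paper's exact definition):

```latex
% Output-sensitivity (paraphrase): an enumeration algorithm A is efficient if
% its running time on input x is polynomially bounded in input plus output size.
T_{A}(x) \;\le\; \mathrm{poly}\big(\,|x| + |A(x)|\,\big).
```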

20 citations


Journal ArticleDOI
TL;DR: This paper introduces the use of the Set-Membership concept, derived from adaptive filter theory, into the training procedure of type-1 and singleton/non-singleton fuzzy logic systems, in order to reduce computational complexity and increase convergence speed.
Abstract: This paper focuses on the classification of faults in an electromechanical switch machine, which is equipment used for handling railroad switches. In this paper, we introduce the use of the Set-Membership concept, derived from adaptive filter theory, into the training procedure of type-1 and singleton/non-singleton fuzzy logic systems, in order to reduce computational complexity and to increase convergence speed. We also present different criteria for use along with Set-Membership. Furthermore, we discuss the usefulness of delta rule delta, local Lipschitz estimation, variable step size, and variable step size adaptive techniques to yield additional improvement in terms of computational complexity reduction and convergence speed. Based on a data set provided by a Brazilian railway company, which covers the four possible faults in a switch machine, we present performance analysis in terms of classification ratio, convergence speed, and computational complexity reduction. The reported results show that the proposed models result in improved convergence speed, slightly higher classification ratio, and a remarkable computational complexity reduction when we limit the number of epochs for training, which may be required due to real-time constraints or low computational resource availability.
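A minimal sketch of the set-membership idea borrowed from adaptive filtering, applied here to a generic linear-in-parameters model rather than the paper's fuzzy logic systems; the error bound gamma and all names are illustrative assumptions:

```python
import numpy as np

def set_membership_train(X, y, gamma=0.1, epochs=10):
    """Set-membership (SM-NLMS-style) training of a linear model y ~ w.x.

    Weights are updated only when the prediction error exceeds the bound
    gamma, which is what cuts the number of updates (and hence the
    computational cost) compared with updating on every sample.
    """
    n_features = X.shape[1]
    w = np.zeros(n_features)
    updates = 0
    for _ in range(epochs):
        for x, target in zip(X, y):
            err = target - np.dot(w, x)
            if abs(err) > gamma:                  # update only outside the error bound
                step = 1.0 - gamma / abs(err)     # data-dependent step size
                w += step * err * x / (np.dot(x, x) + 1e-12)
                updates += 1
    return w, updates

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 4))
    true_w = np.array([1.0, -2.0, 0.5, 0.0])
    y = X @ true_w + 0.05 * rng.normal(size=500)
    w, updates = set_membership_train(X, y, gamma=0.2)
    print("estimated weights:", np.round(w, 2), "| updates performed:", updates)
```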

18 citations


Journal ArticleDOI
TL;DR: Weak average-case analysis as mentioned in this paper is an attempt to achieve theoretical complexity results that are closer to practical experience than those resulting from traditional approaches, and has been used in other areas such as nonasymptotic random matrix theory and compressive sensing.

17 citations


Journal ArticleDOI
TL;DR: This paper studies the connection between the number of AND gates (multiplicative complexity) and the complexity of algebraic attacks, and models the encryption with multiple right-hand sides (MRHS) equations.
Abstract: Lightweight cipher designs try to minimize the implementation complexity of the cipher while maintaining some specified security level. Using only a small number of AND gates lowers the implementation costs, and enables easier protections against side-channel attacks. In our paper we study the connection between the number of AND gates (multiplicative complexity) and the complexity of algebraic attacks. We model the encryption with multiple right-hand sides (MRHS) equations. The resulting equation system is transformed into a syndrome decoding problem. The complexity of the decoding problem depends on the number of AND gates, and on the relative number of known output bits with respect to the number of unknown key bits. This allows us to apply results from coding theory, and to explicitly connect the complexity of the algebraic cryptanalysis to the multiplicative complexity of the cipher. This means that we can provide asymptotic upper bounds on the complexity of algebraic attacks on selected families of ciphers based on the hardness of the decoding problem.

11 citations


Proceedings ArticleDOI
19 Jun 2017
TL;DR: In this article, Rao and Sinha showed that the quantum communication complexity of the Symmetric k-ary Pointer Jumping function is polynomially equivalent to its classical information complexity.
Abstract: We exhibit a Boolean function for which the quantum communication complexity is exponentially larger than the classical information complexity. An exponential separation in the other direction was already known from the work of Kerenidis et al. [SICOMP 44, pp. 1550-1572], hence our work implies that these two complexity measures are incomparable. As classical information complexity is an upper bound on quantum information complexity, which in turn is equal to amortized quantum communication complexity, our work implies that a tight direct sum result for distributional quantum communication complexity cannot hold. The function we use to present such a separation is the Symmetric k-ary Pointer Jumping function introduced by Rao and Sinha [ECCC TR15-057], whose classical communication complexity is exponentially larger than its classical information complexity. In this paper, we show that the quantum communication complexity of this function is polynomially equivalent to its classical communication complexity. The high-level idea behind our proof is arguably the simplest so far for such an exponential separation between information and communication, driven by a sequence of round-elimination arguments, allowing us to simplify further the approach of Rao and Sinha. As another application of the techniques that we develop, a simple proof for an optimal trade-off between Alice's and Bob's communication is given, even when allowing pre-shared entanglement, while computing the related Greater-Than function on n bits: if Bob communicates at most b bits, then Alice must send n/2^{O(b)} bits to Bob. We also present a classical protocol achieving this bound.

10 citations


Journal ArticleDOI
TL;DR: The authors' single-shot bounds relate the communication complexity of simulating a protocol to tail bounds for information complexity density and obtain a strong converse and characterize the second-order asymptotic term in communication complexity for independent and identically distributed observation sequences.
Abstract: Two parties observing correlated random variables seek to run an interactive communication protocol. How many bits must they exchange to simulate the protocol, namely to produce a view with a joint distribution within a fixed statistical distance of the joint distribution of the input and the transcript of the original protocol? We present an information spectrum approach for this problem whereby the information complexity of the protocol is replaced by its information complexity density. Our single-shot bounds relate the communication complexity of simulating a protocol to tail bounds for information complexity density. As a consequence, we obtain a strong converse and characterize the second-order asymptotic term in communication complexity for independent and identically distributed observation sequences. Furthermore, we obtain a general formula for the rate of communication complexity, which applies to any sequence of observations and protocols. Connections with results from theoretical computer science and implications for the function computation problem are discussed.

Journal ArticleDOI
TL;DR: This work replaces the determinant in geometric complexity theory with the trace of a symbolic matrix power and proves that in this homogeneous formulation there are no orbit occurrence obstructions that prove even superlinear lower bounds on the complexity of the permanent.
Abstract: Valiant's famous determinant versus permanent problem is the flagship problem in algebraic complexity theory. Mulmuley and Sohoni (2001, 2008) [23] , [24] introduced geometric complexity theory, an approach to study this and related problems via algebraic geometry and representation theory. Their approach works by multiplying the permanent polynomial with a high power of a linear form (a process called padding) and then comparing the orbit closures of the determinant and the padded permanent. This padding was recently used heavily to show negative results for the method of shifted partial derivatives (Efremenko et al., 2016 [6] ) and for geometric complexity theory (Ikenmeyer and Panova, 2016 [17] and Burgisser et al., 2016 [3] ), in which occurrence obstructions were ruled out to be able to prove superpolynomial complexity lower bounds. Following a classical homogenization result of Nisan (1991) [25] we replace the determinant in geometric complexity theory with the trace of a symbolic matrix power. This gives an equivalent but much cleaner homogeneous formulation of geometric complexity theory in which the padding is removed. This radically changes the representation theoretic questions involved to prove complexity lower bounds. We prove that in this homogeneous formulation there are no orbit occurrence obstructions that prove even superlinear lower bounds on the complexity of the permanent. Interestingly—in contrast to the determinant—the trace of a symbolic matrix power is not uniquely determined by its stabilizer.

Journal ArticleDOI
TL;DR: A notion of size and complexity is defined for strategies in sequential games, inducing a notion of complexity for PCF functions; the corresponding higher-order polynomial-time complexity class contains BFF.

Journal ArticleDOI
TL;DR: A lattice-reduction (LR)-aided breadth-first tree-searching algorithm for MIMO detection achieves near-optimal performance with very low complexity; simulations verify its higher efficiency, in terms of the performance/complexity tradeoff, than the existing LR-aided K-best detectors and LR-aided fixed-complexity sphere decoders.
Abstract: We propose a lattice-reduction (LR)-aided breadth-first tree searching algorithm for MIMO detection achieving near-optimal performance with very low complexity. At each level of the tree in the search, only the paths whose accumulated metrics satisfy a particular restriction condition will be kept as the candidates. Furthermore, the number of child nodes expanded on each parent node, and the maximum number of candidates preserved at each level, are also restricted, respectively. All these measures ensure the proposed algorithm reaching a preset near-optimal performance and achieving very low average and maximum computational complexity. Simulation results verify the proposed algorithm’s higher efficiency in terms of the performance/complexity tradeoff than the existing LR-aided K-best detectors and LR-aided fixed-complexity sphere decoders.
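A minimal sketch of a generic breadth-first K-best search with an accumulated-metric restriction, in the spirit described above. The lattice-reduction preprocessing and the paper's specific restriction condition, child-expansion limit, and candidate limits are omitted; the triangular system, the constellation, and all names are illustrative assumptions:

```python
import numpy as np

def kbest_detect(R, z, constellation, K=8, metric_slack=4.0):
    """Breadth-first K-best search on an upper-triangular system z = R s + n.

    At each tree level, every surviving path is extended with each
    constellation symbol, paths whose accumulated metric exceeds
    (best metric + metric_slack) are discarded, and at most K candidates
    are kept for the next level.
    """
    n = R.shape[0]
    candidates = [([], 0.0)]                      # (partial symbol vector, accumulated metric)
    for level in range(n - 1, -1, -1):            # detect from the last layer upwards
        expanded = []
        for path, metric in candidates:
            for s in constellation:
                symbols = [s] + path
                # interference from already-detected layers
                interf = sum(R[level, level + 1 + j] * symbols[1 + j]
                             for j in range(len(path)))
                inc = abs(z[level] - R[level, level] * s - interf) ** 2
                expanded.append((symbols, metric + inc))
        expanded.sort(key=lambda c: c[1])
        best = expanded[0][1]
        candidates = [c for c in expanded if c[1] <= best + metric_slack][:K]
    return candidates[0][0]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    constellation = [-3, -1, 1, 3]                # real-valued 4-PAM layers, for illustration
    n = 4
    s_true = rng.choice(constellation, size=n)
    R = np.triu(rng.normal(size=(n, n))) + 3 * np.eye(n)
    z = R @ s_true + 0.1 * rng.normal(size=n)
    print("true:", s_true.tolist(), "| detected:", kbest_detect(R, z, constellation))
```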

Journal ArticleDOI
TL;DR: A Necessary Information Complexity notion is introduced to quantify the minimum amount of information needed for the existence of a Probabilistic Approximate equilibrium in statistical ensembles of games.
Abstract: In this work, we study Static and Dynamic Games on Large Networks of interacting agents, assuming that the players have some statistical description of the interaction graph, as well as some local information. Inspired by Statistical Physics, we consider statistical ensembles of games and define a Probabilistic Approximate equilibrium notion for such ensembles. A Necessary Information Complexity notion is introduced to quantify the minimum amount of information needed for the existence of a Probabilistic Approximate equilibrium. We then focus on some special classes of games for which it is possible to derive upper and/or lower bounds for the complexity. First, static and dynamic games on random graphs are studied and their complexity is determined as a function of the graph connectivity. In the low complexity case, we compute Probabilistic Approximate equilibrium strategies. We then consider static games on lattices and derive upper and lower bounds for the complexity, using contraction mapping ideas. An LQ game on a large ring is also studied numerically. Using a reduction technique, approximate equilibrium strategies are computed and it turns out that the complexity is relatively low.

Journal ArticleDOI
TL;DR: A computational study of weighting order and disorder in the LMC and SDL complexity measures is presented, using a binomial probability distribution as reference, showing the qualitative equivalence between the two measures and how the weighting changes the complexity.

Journal ArticleDOI
TL;DR: The Novelli-Pak-Stoyanovskii algorithm is a sorting algorithm for Young tableaux of a fixed shape that was originally devised to give a bijective proof of the hook-length formula.
Abstract: The Novelli-Pak-Stoyanovskii algorithm is a sorting algorithm for Young tableaux of a fixed shape that was originally devised to give a bijective proof of the hook-length formula. We obtain new asymptotic results on the average case and worst case complexity of this algorithm as the underlying shape tends to a fixed limit curve. Furthermore, using the summation package Sigma we prove an exact formula for the average case complexity when the underlying shape consists of only two rows. We thereby answer questions posed by Krattenthaler and Muller.

Journal ArticleDOI
24 Feb 2017
TL;DR: This paper proposes a methodology based on system connections to calculate a system's complexity, illustrated on two case studies modeled with the theory of Discrete Event Systems: the dining Chinese philosophers' problem and a distribution center.
Abstract: This paper proposes a methodology based on system connections to calculate its complexity. Two case studies are proposed: the dining Chinese philosophers' problem and the distribution center. Both studies are modeled using the theory of Discrete Event Systems, and simulations in different contexts were performed in order to measure their complexities. The obtained results show (i) the static complexity as a limiting factor for the dynamic complexity, (ii) the lowest cost in terms of complexity for each unit of measure of the system performance, and (iii) the sensitivity of the output to the input parameters. The associated complexity and performance measures aggregate knowledge about the system.

Book ChapterDOI
01 Jan 2017
TL;DR: In computer science, the theory of computational complexity was developed to evaluate the hardness of problems in terms of the number of steps needed to obtain a solution; however, the Turing machine model underlying that theory is not suitable for algorithms using real numbers.
Abstract: In computer science, the theory of computational complexity was developed to evaluate the hardness of problems in terms of the number of steps needed to obtain a solution; for the basic results of that theory we refer to [147]. The model of computation used in that theory is either the Turing machine or any other equivalent model. The input and output of the Turing machine should be encoded as finite strings of bits. Such an encoding can be used to represent objects of a discrete nature, and the theory of computational complexity is well suited, e.g., to the investigation of problems of single-objective combinatorial optimization [150, 151]. However, the Turing machine model is not suitable for algorithms using real numbers. Therefore, alternative complexity theories have been developed for the investigation of problems of a continuous nature. For the fundamentals of the complexity of real number algorithms we refer to [19, 218], and for the complexity of problems of mathematical programming to [143, 150, 151].

Posted Content
TL;DR: In this paper, the authors proposed a new iterative algorithm, called the K-sets+ algorithm, for clustering data points in a semi-metric space, where the distance measure does not necessarily satisfy the triangular inequality.
Abstract: In this paper, we first propose a new iterative algorithm, called the K-sets+ algorithm, for clustering data points in a semi-metric space, where the distance measure does not necessarily satisfy the triangular inequality. We show that the K-sets+ algorithm converges in a finite number of iterations and it retains the same performance guarantee as the K-sets algorithm for clustering data points in a metric space. We then extend the applicability of the K-sets+ algorithm from data points in a semi-metric space to data points that only have a symmetric similarity measure. Such an extension leads to a great reduction in computational complexity. In particular, for an n × n similarity matrix with m nonzero elements in the matrix, the computational complexity of the K-sets+ algorithm is O((Kn + m)I), where I is the number of iterations. The memory complexity to achieve that computational complexity is O(Kn + m). As such, both the computational complexity and the memory complexity are linear in n when the n × n similarity matrix is sparse, i.e., m = O(n). We also conduct various experiments to show the effectiveness of the K-sets+ algorithm by using a synthetic dataset from the stochastic block model and a real network from the WonderNetwork website.
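A minimal sketch of the complexity accounting only: one refinement pass over an n × n sparse similarity matrix with m nonzero entries touches each nonzero entry and each (point, cluster) pair a constant number of times, i.e. O(Kn + m) work per iteration, so I iterations cost O((Kn + m)I). The assignment rule below (move each point to the cluster with the highest average similarity) is a simplified stand-in, not the actual K-sets+ update:

```python
from collections import defaultdict

def refine_once(n, K, sim, assign):
    """One refinement pass over a sparse symmetric similarity matrix.

    sim: dict mapping (i, j) -> similarity, with m nonzero entries.
    assign: list of length n giving the current cluster of each point.
    Cost: O(Kn + m), since each nonzero entry and each (point, cluster)
    pair is touched a constant number of times.
    """
    sizes = defaultdict(int)
    for c in assign:
        sizes[c] += 1
    # accumulate, for every point, its total similarity to each cluster: O(m)
    total = [defaultdict(float) for _ in range(n)]
    for (i, j), s in sim.items():
        total[i][assign[j]] += s
        total[j][assign[i]] += s
    # reassign each point to the cluster of highest average similarity: O(Kn)
    new_assign = []
    for i in range(n):
        best_c, best_v = assign[i], float("-inf")
        for c in range(K):
            if sizes[c] == 0:
                continue
            v = total[i][c] / sizes[c]
            if v > best_v:
                best_c, best_v = c, v
        new_assign.append(best_c)
    return new_assign

if __name__ == "__main__":
    # two obvious groups: points 0-2 similar to each other, points 3-5 similar to each other
    sim = {(0, 1): 1.0, (1, 2): 1.0, (0, 2): 1.0,
           (3, 4): 1.0, (4, 5): 1.0, (3, 5): 1.0, (2, 3): 0.1}
    assign = [0, 0, 1, 1, 1, 0]          # deliberately scrambled start
    for _ in range(3):                    # I iterations -> O((Kn + m) I) total
        assign = refine_once(6, 2, sim, assign)
    print("final assignment:", assign)
```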

Journal ArticleDOI
TL;DR: In this paper, the complexity of inference with Relational Bayesian Networks as parameterized by their probability formulas was studied, and it was shown that inference is PP-complete, displaying the same complexity as standard Bayesian networks (this is so even when the domain is succinctly specified in binary notation).

01 Jan 2017
TL;DR: A public-coin interactive proof system for S of round complexity O(r(n)/log n) is shown, where r(n) is the randomness complexity of the original proof system for instances of length n.
Abstract: Consider an interactive proof system for some set S that has randomness complexity r(n) for instances of length n, and arbitrary round complexity. We show a public-coin interactive proof system for S of round complexity O(r(n)/log n). Furthermore, the randomness complexity is preserved up to a constant factor, and the resulting interactive proof system has perfect completeness.

Journal ArticleDOI
TL;DR: This paper proposes a criterion based on which, using methods of mathematical programming, an algorithm is constructed for successively reducing the limit complexity of a controller.
Abstract: This paper proposes a criterion based on which, using methods of mathematical programming, an algorithm is constructed for successively reducing the limit complexity of a controller.

Journal ArticleDOI
TL;DR: This paper proves that the Mitchell–Mount–Papadimitriou (MMP) and Chen–Han algorithms have Θ(n^{1.5}) space complexity on a completely regular triangulation (i.e., all triangles are equilateral).


Dissertation
29 Aug 2017
TL;DR: This work strengthens the query complexity gap for the Glued Trees label detection problem by improving a classical lower bound technique, and proves that the lower bound is nearly tight by giving a classical query algorithm whose query complexity matches it up to a polylog factor.
Abstract: Query complexity is one of the several notions of complexity defined to measure the cost of algorithms. It plays an important role in demonstrating the quantum advantage, that quantum computing is faster than classical computation in solving some problems. Kempe showed that a discrete-time quantum walk on the hypercube hits the antipodal point with exponentially fewer queries than a simple random walk [K05]. Childs et al. showed that a continuous-time quantum walk on “Glued Trees” detects the label of a special vertex with exponentially fewer queries than any classical algorithm [CCD03], and the result translates to the discrete-time quantum walk via an efficient simulation. Building on these works, we examine the query complexity of variations of the hypercube and Glued Trees problems. We first show that the gap between quantum and classical query algorithms for a modified hypercube problem is at most polynomial. We then strengthen the query complexity gap for the Glued Trees label detection problem by improving a classical lower bound technique, and we prove that this lower bound is nearly tight by giving a classical query algorithm whose query complexity matches the lower bound, up to a polylog factor.

Journal ArticleDOI
04 Jan 2017
TL;DR: The time complexity of the Binary Tree Roll algorithm is shown, both theoretically and empirically, to be linear in the best case and quadratic in the worst case, whereas its average case is shown to be dominantly linear for trees with a relatively small number of nodes and dominantly Quadratic otherwise.
Abstract: This paper presents the time complexity analysis of the Binary Tree Roll algorithm. The time complexity is analyzed theoretically and the results are then confirmed empirically. The theoretical analysis consists of finding recurrence relations for the time complexity, and solving them using various methods. The empirical analysis consists of exhaustively testing all trees with given numbers of nodes and counting the minimum and maximum steps necessary to complete the roll algorithm. The time complexity is shown, both theoretically and empirically, to be linear in the best case and quadratic in the worst case, whereas its average case is shown to be dominantly linear for trees with a relatively small number of nodes and dominantly quadratic otherwise.

Journal ArticleDOI
TL;DR: The exponent in the time complexity of local model checking for the propositional μ-calculus is reduced from d to d/2, making the proposed algorithm more efficient than those of previous research.
Abstract: Model checking for the propositional μ-calculus can be divided into two categories, global model checking algorithms and local model checking algorithms, both of which aim at reducing time complexity and space complexity effectively. This paper analyzes the computation of alternating nested fixpoints in detail and designs an efficient local model checking algorithm for the propositional μ-calculus based on a group of partial order relations. Its time complexity is O(d^2 (dn)^{d/2+2}) (where d is the depth of fixpoint nesting and n is the maximum number of nodes), and its space complexity is O(d(dn)^{d/2}). As far as we know, the best previously known local model checking algorithms have time complexity whose exponent is d; in this paper, that exponent is reduced from d to d/2, making the algorithm more efficient than those of previous research.


Proceedings ArticleDOI
01 Jan 2017
TL;DR: This paper proposes a novel symbol-to-bit demapping algorithm for Gray-labeled phase shift keying (PSK) constellations and shows that the proposed algorithm achieves a more substantial complexity reduction than the first of the two benchmark methods considered.
Abstract: This paper proposes a novel symbol-to-bit demapping algorithm for Gray-labeled phase shift keying (PSK) constellations. Unlike the Max-Log-MAP demapper, the proposed algorithm does not perform exhaustive search operations, but directly computes the soft information by exploiting binary search and the symmetry of the Gray-labeled PSK constellations. Hence its complexity is remarkably reduced from the order of O(2^M) of the Max-Log-MAP to O(M), where M denotes the number of bits per symbol. A pair of recent methods are used as benchmarks in this paper. One of them reduces the complexity without any performance loss, while the other is a recursive method that reduces the complexity by approximating the original bit metric. It is shown that the proposed algorithm achieves a more substantial complexity reduction than the former benchmark. The proposed algorithm achieves a similar complexity reduction to the latter benchmark and yet does not suffer from any performance degradation.
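For contrast with the proposed O(M) approach, below is a minimal sketch of the exhaustive Max-Log-MAP demapper that the paper improves upon; the Gray-mapping helper, the noise model, and all names are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def gray_psk_constellation(M):
    """Gray-labeled 2^M-PSK: Gray-code label of index k -> unit-circle point."""
    n_sym = 2 ** M
    points = {}
    for k in range(n_sym):
        gray = k ^ (k >> 1)                       # binary-reflected Gray code of k
        points[gray] = np.exp(2j * np.pi * k / n_sym)
    return points                                  # label (M-bit int) -> complex symbol

def max_log_map_llrs(r, points, M, noise_var=0.1):
    """Exhaustive Max-Log-MAP: for each bit, min distance over the two label sets.

    This searches all 2^M constellation points per bit, i.e. O(2^M) work,
    which is exactly the cost the binary-search demapper avoids.
    """
    llrs = []
    for bit in range(M):
        d0 = min(abs(r - s) ** 2 for lbl, s in points.items() if not (lbl >> bit) & 1)
        d1 = min(abs(r - s) ** 2 for lbl, s in points.items() if (lbl >> bit) & 1)
        llrs.append((d1 - d0) / noise_var)         # positive LLR favours bit = 0
    return llrs

if __name__ == "__main__":
    M = 3                                          # 8-PSK
    points = gray_psk_constellation(M)
    tx_label = 0b101
    r = points[tx_label] + 0.05 * (np.random.randn() + 1j * np.random.randn())
    llrs = max_log_map_llrs(r, points, M)
    hard = sum((llr < 0) << b for b, llr in enumerate(llrs))
    print("transmitted label:", bin(tx_label), "| demapped label:", bin(hard))
```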