
Showing papers on "Constant (mathematics)" published in 2016


Proceedings Article
06 Jun 2016
TL;DR: In this article, it was shown that there is a simple (approximately radial) function on R^d, expressible by a small 3-layer feedforward neural network, which cannot be approximated by any 2-layer network to more than a certain constant accuracy unless its width is exponential in the dimension.
Abstract: We show that there is a simple (approximately radial) function on R^d, expressible by a small 3-layer feedforward neural network, which cannot be approximated by any 2-layer network, to more than a certain constant accuracy, unless its width is exponential in the dimension. The result holds for virtually all known activation functions, including rectified linear units, sigmoids and thresholds, and formally demonstrates that depth, even if increased by 1, can be exponentially more valuable than width for standard feedforward neural networks. Moreover, compared to related results in the context of Boolean functions, our result requires fewer assumptions, and the proof techniques and construction are very different.

490 citations


Journal ArticleDOI
TL;DR: It is shown that the relative standard deviation associated with the FPT of an optimally restarted process, i.e., one that is restarted at a constant (nonzero) rate which brings the mean FPT to a minimum, is always unity.
Abstract: Stochastic restart may drastically reduce the expected run time of a computer algorithm, expedite the completion of a complex search process, or increase the turnover rate of an enzymatic reaction. These diverse first-passage-time (FPT) processes seem to have very little in common but it is actually quite the other way around. Here we show that the relative standard deviation associated with the FPT of an optimally restarted process, i.e., one that is restarted at a constant (nonzero) rate which brings the mean FPT to a minimum, is always unity. We interpret, further generalize, and discuss this finding and the implications arising from it.
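To make the unity claim concrete, here is a minimal Monte Carlo sketch (our construction, not the paper's): we take the underlying process to be a one-dimensional Brownian search, whose first-passage time can be sampled in closed form as T = (a/Z)^2 with Z standard normal, and apply Poissonian restarts at rate r. Scanning r, the ratio std/mean should dip to roughly 1 near the rate that minimizes the mean FPT.

```python
import numpy as np

rng = np.random.default_rng(0)
a = 1.0  # distance from origin to target (standard Brownian motion)

def fpt_under_restart(r, n=100_000):
    """Sample n first-passage times under Poissonian restart at rate r.
    A restart returns the searcher to the origin, so the remaining FPT
    is an independent copy of the underlying FPT T = (a/Z)^2."""
    total = np.zeros(n)
    alive = np.ones(n, dtype=bool)
    while alive.any():
        m = int(alive.sum())
        T = (a / rng.standard_normal(m)) ** 2  # fresh underlying FPT
        R = rng.exponential(1.0 / r, size=m)   # time until next restart
        done = T <= R
        total[alive] += np.where(done, T, R)
        idx = np.flatnonzero(alive)
        alive[idx[done]] = False
    return total

# Scan restart rates; near the optimal rate the mean is minimal
# and the relative standard deviation should be close to 1.
for r in (0.5, 1.0, 1.3, 2.0, 4.0):
    t = fpt_under_restart(r)
    print(f"r = {r:3.1f}: mean = {t.mean():6.3f}, std/mean = {t.std() / t.mean():4.2f}")
```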

228 citations


Journal ArticleDOI
TL;DR: This work investigates the uniqueness of solutions for a class of nonlinear boundary value problems for fractional differential equations, with Lipschitz constants related to the first eigenvalues of the relevant operators.

185 citations



Journal ArticleDOI
18 Aug 2016
TL;DR: In this article, the authors study the evolution of the class structure as the factor determining individuals' changing experience of mobility, as expressed in absolute rates, and relative mobility rates, which compare the chances of individuals of different class origins arriving at different class destinations.
Abstract: The class structure provides an important context for the study of social mobility. The evolution of the class structure is the all-important factor determining individuals’ changing experience of mobility, as expressed in absolute rates. The total mobility rate shows long-term stability; but, because of structural change, trends of rising upward and falling downward mobility in the mid-20th century are now being reversed. Relative mobility rates, comparing the chances of individuals of different class origins arriving at different class destinations, also show long-term stability. All this is evident over a period of more or less continuous educational expansion and reform—thus calling into question the belief that educational policy is key to promoting mobility. Education is best considered as a ‘positional’ good; and the motivation, and capacity, of parents in more advantaged class positions to help their children maintain their competitive edge in the educational system, and in turn in labour markets, underlies the resistance to change that the mobility regime displays.

127 citations


Journal ArticleDOI
TL;DR: It is proved that the asymptotic stability of positive fractional-order systems is not sensitive to the magnitude of delays, and it is shown that the L∞-gain of a positive fractional-order system is independent of the magnitude of delays and fully determined by the system matrices.
Abstract: This paper addresses the stability and $L_{\infty}$ -gain analysis problem for continuous-time positive fractional-order delay systems with incommensurate orders between zero and one. A necessary and sufficient condition is firstly given to characterize the positivity of continuous-time fractional-order systems with bounded time-varying delays. Moreover, by exploiting the monotonic and asymptotic property of the constant delay system by virtue of the positivity, and comparing the trajectory of the time-varying delay system with that of the constant delay system, it is proved that the asymptotic stability of positive fractional-order systems is not sensitive to the magnitude of delays. In addition, it is shown that the $L_{\infty}$ -gain of a positive fractional-order system is independent of the magnitude of delays and fully determined by the system matrices. Finally, a numerical example is given to show the validity of the theoretical results.

115 citations


Proceedings ArticleDOI
01 Oct 2016
TL;DR: The results improve the best known running times from quasi-polynomial to polynomial for several problems, including decomposing random overcomplete 3-tensors and learning overcomplete dictionaries with constant relative sparsity.
Abstract: We give new algorithms based on the sum-of-squares method for tensor decomposition. Our results improve the best known running times from quasi-polynomial to polynomial for several problems, including decomposing random overcomplete 3-tensors and learning overcomplete dictionaries with constant relative sparsity. We also give the first robust analysis for decomposing overcomplete 4-tensors in the smoothed analysis model. A key ingredient of our analysis is to establish small spectral gaps in moment matrices derived from solutions to sum-of-squares relaxations. To enable this analysis we augment sum-of-squares relaxations with spectral analogs of maximum entropy constraints.

111 citations


Journal ArticleDOI
TL;DR: It is shown that, unless RP = NP, for all Δ ⩾ 3 there is no FPRAS for approximating the partition function of the antiferromagnetic Ising model on graphs of maximum degree Δ when the inverse temperature lies in the non-uniqueness region of the infinite tree.
Abstract: Recent inapproximability results of Sly (2010), together with an approximation algorithm presented by Weitz (2006), establish a beautiful picture of the computational complexity of approximating the partition function of the hard-core model. Let λ_c(T_Δ) denote the critical activity for the hard-core model on the infinite Δ-regular tree. Weitz presented an FPTAS for the partition function when λ < λ_c(T_Δ) for graphs of constant maximum degree Δ. Sly showed that there exists ε_Δ > 0 such that (unless RP = NP) there is no FPRAS for approximating the partition function on graphs of maximum degree Δ for activities λ satisfying λ_c(T_Δ) < λ < λ_c(T_Δ) + ε_Δ. We prove that a similar phenomenon holds for the antiferromagnetic Ising model. Sinclair, Srivastava and Thurley (2014) extended Weitz's approach to the antiferromagnetic Ising model, yielding an FPTAS for the partition function for all graphs of constant maximum degree Δ when the parameters of the model lie in the uniqueness region of the infinite Δ-regular tree. We prove the complementary result for the antiferromagnetic Ising model without external field, namely, that unless RP = NP, for all Δ ⩾ 3, there is no FPRAS for approximating the partition function on graphs of maximum degree Δ when the inverse temperature lies in the non-uniqueness region of the infinite tree T_Δ. Our proof works by relating certain second moment calculations for random Δ-regular bipartite graphs to the tree recursions used to establish the critical points on the infinite tree.
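As a concrete companion to the tree recursions mentioned at the end, here is a small numerical sketch (our illustration, using the standard hard-core recursion and Kelly's well-known formula for the critical activity, neither of which is specific to this paper): below λ_c(T_Δ) = (Δ−1)^(Δ−1)/(Δ−2)^Δ the recursion x ↦ λ/(1+x)^(Δ−1) settles on a unique fixed point, while above it consecutive iterates split into a 2-cycle, which is the non-uniqueness phenomenon the hardness results hinge on.

```python
# Hard-core tree recursion on the infinite Δ-regular tree (illustrative sketch).
def lambda_c(D):
    """Kelly's formula for the critical activity λ_c(T_Δ)."""
    return (D - 1) ** (D - 1) / (D - 2) ** D

def last_two_iterates(lam, D, steps=5000):
    """Iterate x -> λ / (1 + x)^(Δ-1) and return two consecutive iterates."""
    x = lam
    for _ in range(steps):
        x = lam / (1 + x) ** (D - 1)
    return x, lam / (1 + x) ** (D - 1)

D = 6
for factor in (0.8, 1.2):
    lam = factor * lambda_c(D)
    a, b = last_two_iterates(lam, D)
    # equal iterates -> unique fixed point (uniqueness region);
    # distinct iterates -> stable 2-cycle (non-uniqueness region)
    print(f"λ = {factor:.1f}·λ_c: consecutive iterates {a:.5f}, {b:.5f}")
```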

103 citations


Proceedings ArticleDOI
10 Jan 2016
TL;DR: In the noiseless case with the number of defective items k scaling with the total number of items p as O(p^θ) (θ ∈ (0, 1)), it is shown that the probability of reconstruction error tends to one when n ≤ k log2(p/k)(1 + o(1)), but vanishes when n ≥ c(θ)k log2(p/k)(1 + o(1)) for some explicit constant c(θ), thus providing an exact threshold on the required number of measurements.
Abstract: The group testing problem consists of determining a sparse subset of a set of items that are "defective" based on a set of possibly noisy tests, and arises in areas such as medical testing, fault detection, communication protocols, pattern matching, and database systems. We study the fundamental limits of any group testing procedure regardless of its computational complexity. In the noiseless case with the number of defective items k scaling with the total number of items p as O(p^θ) (θ ∈ (0, 1)), we show that the probability of reconstruction error tends to one when n ≤ k log2(p/k)(1 + o(1)), but vanishes when n ≥ c(θ)k log2(p/k)(1 + o(1)), for some explicit constant c(θ). For θ ≤ 1/3, we show that c(θ) = 1, thus providing an exact threshold on the required number of measurements, i.e. a phase transition, which was previously known only in the limit as θ → 0. Analogous necessary and sufficient conditions are derived for the noisy setting, and also for a relaxed partial recovery criterion. This work was supported in part by the European Commission under Grant ERC Future Proof, SNF 200021-146750 and SNF CRSII2-147633. The first author acknowledges support from the 'EPFL Fellows' fellowship programme co-funded by Marie Sklodowska-Curie, Horizon2020 Grant agreement no. 665667.
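For a sense of scale, the threshold formula from the abstract can be evaluated directly (the numbers below are illustrative, not from the paper's experiments):

```python
import math

# Noiseless group-testing threshold n* ≈ c(θ) · k · log2(p/k),
# with c(θ) = 1 for θ ≤ 1/3, as stated in the abstract.
p = 10 ** 6
for theta in (0.1, 0.2, 0.3):
    k = round(p ** theta)              # number of defective items, k = p^θ
    n_star = k * math.log2(p / k)      # c(θ) = 1 in this regime
    print(f"θ = {theta}: k = {k:4d}, threshold ≈ {n_star:,.0f} tests")
```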

101 citations


Journal ArticleDOI
18 Apr 2016-EPL
TL;DR: In this article, the authors derive the first and second law for resetting processes far from equilibrium and derive a bound on the amount of work required to maintain a resetting process.
Abstract: Stochastic dynamics with random resetting leads to a non-equilibrium steady state. Here, we consider the thermodynamics of resetting by deriving the first and second law for resetting processes far from equilibrium. We identify the contributions to the entropy production of the system which arise due to resetting and show that they correspond to the rate with which information is either erased or created. Using Landauer's principle, we derive a bound on the amount of work that is required to maintain a resetting process. We discuss different regimes of resetting, including a Maxwell demon scenario where heat is extracted from a bath at constant temperature.

100 citations


Journal Article
TL;DR: In this article, the authors established a new upper bound on the number of samples sufficient for PAC learning in the realizable case, which matches known lower bounds up to numerical constant factors, and solved a long-standing open problem on the sample complexity of PAC learning.
Abstract: This work establishes a new upper bound on the number of samples sufficient for PAC learning in the realizable case. The bound matches known lower bounds up to numerical constant factors. This solves a long-standing open problem on the sample complexity of PAC learning. The technique and analysis build on a recent breakthrough by Hans Simon.
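For reference, the matching bound in question can be written as follows (a hedged statement of the well-known optimal realizable-case sample complexity, with d the VC dimension, ε the accuracy, and δ the confidence parameter):

```latex
% Optimal realizable-case PAC sample complexity (d = VC dimension):
M(\varepsilon, \delta) \;=\; \Theta\!\left( \frac{d + \log(1/\delta)}{\varepsilon} \right)
```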

Journal ArticleDOI
TL;DR: Based on the augmented Lagrangian method, a distributed primal-dual algorithm with a projection operation is proposed in this paper, under which the local estimates derived at all agents asymptotically reach consensus at an optimal solution.

Journal ArticleDOI
TL;DR: A general model is proposed in which it is only assumed that each observation y_i may depend on a_i only through ⟨a_i, x⟩, which leads to the intriguing conclusion that in the high noise regime, an unknown non-linearity in the observations does not significantly reduce one's ability to determine the signal, even when the non-linearity may be non-invertible.
Abstract: Consider measuring an n-dimensional vector x through the inner product with several measurement vectors, a_1, a_2, ..., a_m. It is common in both signal processing and statistics to assume the linear response model y_i = ⟨a_i, x⟩ + e_i, where e_i is a noise term. However, in practice the precise relationship between the signal x and the observations y_i may not follow the linear model, and in some cases it may not even be known. To address this challenge, in this paper we propose a general model where it is only assumed that each observation y_i may depend on a_i only through ⟨a_i, x⟩. We do not assume that the dependence is known. This is a form of the semiparametric single index model, and it includes the linear model as well as many forms of the generalized linear model as special cases. We further assume that the signal x has some structure, and we formulate this as a general assumption that x belongs to some known (but arbitrary) feasible set K. We carefully detail the benefit of using the signal structure to improve estimation. The theory is based on the mean width of K, a geometric parameter which can be used to understand its effective dimension in estimation problems. We determine a simple, efficient two-step procedure for estimating the signal based on this model -- a linear estimation followed by metric projection onto K. We give general conditions under which the estimator is minimax optimal up to a constant. This leads to the intriguing conclusion that in the high noise regime, an unknown non-linearity in the observations does not significantly reduce one's ability to determine the signal, even when the non-linearity may be non-invertible. Our results may be specialized to understand the effect of non-linearities in compressed sensing.
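The two-step procedure described above is simple enough to sketch. The following toy instance is our own (assumed: Gaussian measurements, a 1-bit nonlinearity y_i = sign(⟨a_i, x⟩), and K the set of s-sparse vectors, so that metric projection reduces to hard-thresholding); it illustrates linear estimation followed by projection onto K:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, s = 500, 2000, 10
x = np.zeros(n)
x[:s] = 1 / np.sqrt(s)                    # unit-norm s-sparse signal
A = rng.standard_normal((m, n))           # Gaussian measurement vectors a_i
y = np.sign(A @ x)                        # unknown nonlinearity: 1-bit observations

x_lin = (y @ A) / m                       # step 1: linear estimation
idx = np.argsort(np.abs(x_lin))[-s:]      # step 2: metric projection onto K
x_hat = np.zeros(n)                       # (keep the s largest coordinates)
x_hat[idx] = x_lin[idx]
x_hat /= np.linalg.norm(x_hat)
print("correlation with truth:", float(x_hat @ x))
```

Despite the non-invertible sign nonlinearity, the correlation with the true signal comes out close to 1, in line with the abstract's conclusion.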

Journal ArticleDOI
TL;DR: In this paper, the authors considered a diffuse interface model which describes the motion of an incompressible isothermal mixture of two immiscible fluids and showed the weak-strong uniqueness in the case of viscosity depending on the order parameter.
Abstract: We consider a diffuse interface model which describes the motion of an incompressible isothermal mixture of two immiscible fluids. This model consists of the Navier–Stokes equations coupled with a convective nonlocal Cahn–Hilliard equation. Several results were already proven by two of the present authors. However, in the two-dimensional case, the uniqueness of weak solutions was still open. Here we establish such a result even in the case of degenerate mobility and singular potential. Moreover, we show the weak–strong uniqueness in the case of viscosity depending on the order parameter, provided that either the mobility is constant and the potential is regular or the mobility is degenerate and the potential is singular. In the case of constant viscosity, on account of the uniqueness results, we can deduce the connectedness of the global attractor whose existence was obtained in a previous paper. The uniqueness technique can be adapted to show the validity of a smoothing property for the difference of two trajectories which is crucial to establish the existence of an exponential attractor. The latter is established even in the case of variable viscosity, constant mobility and regular potential.

Journal ArticleDOI
TL;DR: A variant of the bounded differences inequality which can be used to establish concentration of functions f(X) where (i) the typical changes are small, although (ii) the worst case changes might be very large, is proved.
Abstract: Concentration inequalities are fundamental tools in probabilistic combinatorics and theoretical computer science for proving that functions of random variables are typically near their means. Of particular importance is the case where f(X) is a function of independent random variables X = (X_1, . . ., X_n). Here the well-known bounded differences inequality (also called McDiarmid's inequality or the Hoeffding–Azuma inequality) establishes sharp concentration if the function f does not depend too much on any of the variables. One attractive feature is that it relies on a very simple Lipschitz condition (L): it suffices to show that |f(X) − f(X′)| ⩽ c_k whenever X, X′ differ only in X_k. While this is easy to check, the main disadvantage is that it considers worst-case changes c_k, which often makes the resulting bounds too weak to be useful. In this paper we prove a variant of the bounded differences inequality which can be used to establish concentration of functions f(X) where (i) the typical changes are small, although (ii) the worst case changes might be very large. One key aspect of this inequality is that it relies on a simple condition that (a) is easy to check and (b) coincides with heuristic considerations as to why concentration should hold. Indeed, given an event Γ that holds with very high probability, we essentially relax the Lipschitz condition (L) to situations where Γ occurs. The point is that the resulting typical changes c_k are often much smaller than the worst case ones. To illustrate its application we consider the reverse H-free process, where H is 2-balanced. We prove that the final number of edges in this process is concentrated, and also determine its likely value up to constant factors. This answers a question of Bollobás and Erdős.
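For comparison, the classical inequality being relaxed states that, under the Lipschitz condition (L),

```latex
% McDiarmid's bounded differences inequality under condition (L):
\Pr\big( |f(X) - \mathbb{E} f(X)| \ge t \big) \;\le\; 2 \exp\!\left( -\frac{2 t^2}{\sum_k c_k^2} \right),
```

and the paper's variant replaces the worst-case constants c_k with typical-case constants valid on the high-probability event Γ.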

Journal Article
TL;DR: The celebrated IP = PSPACE theorem allows an all-powerful but untrusted prover, as discussed by the authors.
Abstract: The celebrated ${\sf IP}={\sf PSPACE}$ theorem [Lund, Fortnow, Karloff, and Nisan, J. ACM, 39 (1992), pp. 859--868; Shamir, J. ACM, 39 (1992), pp. 869--877] allows an all-powerful but untrusted pro...

Journal ArticleDOI
TL;DR: This work shows a simple approach to extract the short-ranged Kirkwood g-factor from molecular dynamics (MD) simulation by superposing the outcomes of constant electric field E and constant electric displacement D simulations and computed the bulk dielectric constant of liquid water modeled in the generalized gradient approximation (PBE) to density functional theory.
Abstract: In his classic 1939 paper, Kirkwood linked the macroscopic dielectric constant of polar liquids to the local orientational order as measured by the g-factor (later named after him) and suggested that the corresponding dielectric constant at short-range is effectively equal to the macroscopic value just after "a distance of molecular magnitude" [Kirkwood, J. Chem. Phys., 1939, 7, 911]. Here, we show a simple approach to extract the short-ranged Kirkwood g-factor from molecular dynamics (MD) simulation by superposing the outcomes of constant electric field E and constant electric displacement D simulations [Zhang and Sprik, Phys. Rev. B: Condens. Matter Mater. Phys., 2016, 93, 144201]. Rather than from the notoriously slow fluctuations of the dipole moment of the full MD cell, the dielectric constant can now be estimated from dipole fluctuations at short-range, accelerating the convergence. Exploiting this feature, we computed the bulk dielectric constant of liquid water modeled in the generalized gradient approximation (PBE) to density functional theory.

Posted Content
TL;DR: In this article, instead of treating mismatches as the source of ill performance, the authors take them as design parameters and show that, by introducing such a pair of parameters per distance constraint, a distributed controller achieving both formation and motion control simultaneously can be designed that not only encompasses the popular gradient control, but also achieves constant collective translation, rotation, or their combination, while guaranteeing that asymptotically no distortion in the formation shape occurs.
Abstract: Recently it has been reported that range-measurement inconsistency, or equivalently mismatches in prescribed inter-agent distances, may prevent the popular gradient controllers from guiding rigid formations of mobile agents to converge to their desired shape, and even worse, from standing still at any location. In this paper, instead of treating mismatches as the source of ill performance, we take them as design parameters and show that, by introducing such a pair of parameters per distance constraint, a distributed controller achieving simultaneously both formation and motion control can be designed that not only encompasses the popular gradient control, but more importantly allows us to achieve constant collective translation, rotation, or their combination, while guaranteeing asymptotically that no distortion in the formation shape occurs. Such motion control results are then applied to (a) the alignment of formations' orientations and (b) enclosing and tracking a moving target. Besides rigorous mathematical proof, experiments using mobile robots are demonstrated to show the satisfactory performance of the proposed formation-motion distributed controller.
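The mismatch effect is easy to reproduce numerically. Below is a hedged toy sketch (our setup, not the paper's controller): three agents run the standard gradient law u_i = −Σ_j (‖p_i − p_j‖² − d_ij²)(p_i − p_j), with one pair disagreeing about its prescribed distance by μ; the residual force on that edge no longer cancels, so the formation drifts collectively, the effect the paper turns into a design parameter.

```python
import numpy as np

rng = np.random.default_rng(0)
p = rng.standard_normal((3, 2))                  # three agents in the plane
edges = {(0, 1): 1.0, (1, 2): 1.0, (0, 2): 1.0}  # target triangle side lengths
mu = {(0, 1): 0.2}                               # distance mismatch on one edge
dt = 1e-3
for _ in range(20_000):
    u = np.zeros_like(p)
    for (i, j), dij in edges.items():
        e = np.sum((p[i] - p[j]) ** 2) - dij ** 2
        u[i] += -e * (p[i] - p[j])
        # agent j uses a mismatched version of the same constraint
        u[j] += -(e + mu.get((i, j), 0.0)) * (p[j] - p[i])
    p += dt * u
print("net force on centroid:", u.sum(axis=0))   # nonzero -> steady collective drift
```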

Journal ArticleDOI
TL;DR: In this paper, the authors established the global existence of strong and weak solutions to the two-dimensional barotropic compressible Navier-Stokes equations with no restrictions on the size of initial data, provided the shear viscosity is a positive constant and the bulk one is λ = ρ^β with β > 4/3.

Journal ArticleDOI
TL;DR: These estimates deliver a global error upper bound with constant one and, up to data oscillation, error lower bounds on element patches with a generic constant dependent only on the mesh regularity and with a computable bound.
Abstract: We devise and study experimentally adaptive strategies driven by a posteriori error estimates to select automatically both the space mesh and the polynomial degree in the numerical approximation of diffusion equations in two space dimensions. The adaptation is based on equilibrated flux estimates. These estimates are presented here for inhomogeneous Dirichlet and Neumann boundary conditions, for spatially-varying polynomial degree, and for mixed rectangular-triangular grids possibly containing hanging nodes. They deliver a global error upper bound with constant one and, up to data oscillation, error lower bounds on element patches with a generic constant only dependent on the mesh regularity and with a computable bound. We numerically assess the estimates and several hp-adaptive strategies using the interior penalty discontinuous Galerkin method. Asymptotic exactness is observed for all the symmetric, nonsymmetric (odd degrees), and incomplete variants on non-nested unstructured triangular grids for a smooth solution and uniform refinement. Exponential convergence rates are reported on nonmatching triangular grids for the incomplete version on several benchmarks with a singular solution and adaptive refinement.

Posted Content
TL;DR: This paper develops fast stochastic algorithms that provably converge to a stationary point for constant minibatches and proves a global linear convergence rate for an interesting subclass of nonsmooth nonconvex functions that subsumes several recent works.
Abstract: We analyze stochastic algorithms for optimizing nonconvex, nonsmooth finite-sum problems, where the nonconvex part is smooth and the nonsmooth part is convex. Surprisingly, unlike the smooth case, our knowledge of this fundamental problem is very limited. For example, it is not known whether the proximal stochastic gradient method with constant minibatch converges to a stationary point. To tackle this issue, we develop fast stochastic algorithms that provably converge to a stationary point for constant minibatches. Furthermore, using a variant of these algorithms, we show provably faster convergence than batch proximal gradient descent. Finally, we prove a global linear convergence rate for an interesting subclass of nonsmooth nonconvex functions, that subsumes several recent works. This paper builds upon our recent series of papers on fast stochastic methods for smooth nonconvex optimization [22, 23], with a novel analysis for nonconvex and nonsmooth functions.
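For orientation, the baseline the abstract refers to, the proximal stochastic gradient method with a constant minibatch, looks as follows. This is a minimal sketch on an assumed nonconvex-plus-ℓ_1 toy problem; the paper's contribution is faster variance-reduced variants, which this sketch does not implement.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, b, lam, eta = 1000, 50, 10, 0.01, 0.05
A = rng.standard_normal((n, d))
y = rng.standard_normal(n)

def grad_batch(x, idx):
    """Minibatch gradient of the smooth nonconvex part
    f_i(x) = 0.5 * (tanh(a_i @ x) - y_i)^2 (tanh link => nonconvex)."""
    z = np.tanh(A[idx] @ x)
    return A[idx].T @ ((z - y[idx]) * (1 - z ** 2)) / len(idx)

def prox_l1(v, t):
    """Soft-thresholding: the proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(d)
for _ in range(2000):
    idx = rng.choice(n, size=b, replace=False)  # constant minibatch of size b
    x = prox_l1(x - eta * grad_batch(x, idx), eta * lam)
print("nonzero coordinates:", int((x != 0).sum()))
```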

Journal ArticleDOI
TL;DR: In this paper, it was proved that h_X(f^n(P)) << (d_f + e)^n h_X(P), where the implied constant depends only on X, h_X, f, and e, and it was conjectured that a_f(P) = d_f whenever the orbit of P is Zariski dense.
Abstract: Let f : X --> X be a dominant rational map of a projective variety defined over a global field, let d_f be the dynamical degree of f, and let h_X be a Weil height on X relative to an ample divisor. We prove that h_X(f^n(P)) << (d_f + e)^n h_X(P), where the implied constant depends only on X, h_X, f, and e. As applications, we prove a fundamental inequality a_f(P) \le d_f for the upper arithmetic degree and we construct canonical heights for (nef) divisors. We conjecture that a_f(P) = d_f whenever the orbit of P is Zariski dense, and we describe some cases for which we can prove our conjecture.

Posted Content
TL;DR: In this paper, the authors show that the Motley-Morley method yields a convex path when the normal income path is concave and the true optimal path is postulated as constant.
Abstract: I also agree that both are at best only approximations to actual lifetime incomes. Fortunately some of the errors in such an approximation may disappear since the same types of biases are apt to appear in both W and W*, and only the ratio of W to W* is used in the analysis. I find the Motley-Morley method of calculating optimum lifetime incomes interesting, but I am puzzled as to why their model becomes the Thurow technique when it yields incorrect results. They show that equation (7) yields a convex path when the normal income path is concave and the true optimal path is postulated as constant. I agree with the conclusions, but equation (7) is not the estimating equation which I used. It is even labelled an alternative model. Thus, I can hardly consider the incorrect results a critique of my estimating equation.

Journal ArticleDOI
TL;DR: It is shown that the minimum wave speed of traveling waves for the three-dimensional non-monotonic system can be derived from its linearization at the initial disease-free equilibrium.
Abstract: We study the existence and nonexistence of traveling waves of a general diffusive Kermack–McKendrick SIR model with standard incidence where the total population is not constant. The three classes, susceptible S, infected I and removed R, are all involved in the traveling wave solutions. We show that the minimum wave speed of traveling waves for the three-dimensional non-monotonic system can be derived from its linearization at the initial disease-free equilibrium. The proof in this paper is based on the Schauder fixed point theorem and the Laplace transform. Our study provides a promising method to deal with high-dimensional epidemic models.

Posted Content
TL;DR: This paper provides the first implementation of a protocol from the LEGO family that has a constant number of rounds and is optimized for the offline/online setting with function-independent preprocessing and is significantly more efficient than previous protocols when considering a high latency network.
Abstract: Secure two-party computation (S2PC) allows two parties to compute a function on their joint inputs while leaking only the output of the function. At TCC 2009 Orlandi and Nielsen proposed the LEGO protocol for maliciously secure 2PC based on cut-and-choose of Yao’s garbled circuits at the gate level and showed that this is asymptotically more efficient than on the circuit level. Since then the LEGO approach has been improved upon in several theoretical works, but never implemented. In this paper we describe further concrete improvements and provide the first implementation of a protocol from the LEGO family. Our protocol has a constant number of rounds and is optimized for the offline/online setting with function-independent preprocessing. We have benchmarked our prototype and find that our protocol can compete with all existing implementations and that it is often more efficient. As an example, in a LAN setting we can evaluate an AES-128 circuit with online latency down to 1.13ms, while if evaluating 128 AES-128 circuits in parallel the amortized cost is 0.09ms per AES-128. This online performance does not come at the price of offline inefficiency as we achieve comparable performance to previous, less general protocols, and significantly better if we ignore the cost of the function-independent preprocessing. Also, as our protocol has an optimal 2-round online phase it is significantly more efficient than previous protocols when considering a high latency network. Keywords—Secure Two-party Computation, Implementation, LEGO, XOR-Homomorphic Commitments, Selective OT-Attack

Posted Content
TL;DR: This work proposes a novel, efficient approach for distributed sparse learning in high dimensions, where observations are randomly partitioned across machines, and which provably matches the estimation error bound of centralized methods within a constant number of communication rounds.
Abstract: We propose a novel, efficient approach for distributed sparse learning in high dimensions, where observations are randomly partitioned across machines. Computationally, at each round our method only requires the master machine to solve a shifted ℓ_1-regularized M-estimation problem, and the other workers to compute the gradient. With respect to communication, the proposed approach provably matches the estimation error bound of centralized methods within a constant number of communication rounds (ignoring logarithmic factors). We conduct extensive experiments on both simulated and real-world datasets, and demonstrate encouraging performance on high-dimensional regression and classification tasks.

Journal ArticleDOI
TL;DR: In this paper, the structure constant for three heavy operators is computed in the semiclassical limit, with the scale set by the inverse length of the operators playing the role of the Planck constant.
Abstract: We develop analytical methods for computing the structure constant for three heavy operators, starting from the recently proposed hexagon approach. Such a structure constant is a semiclassical object, with the scale set by the inverse length of the operators playing the role of the Planck constant. We reformulate the hexagon expansion in terms of multiple contour integrals and recast it as a sum over clusters generated by the residues of the measure of integration. We test the method on two examples. First, we compute the asymptotic three-point function of heavy fields at any coupling and show the result in the semiclassical limit matches both the string theory computation at strong coupling and the tree-level results obtained before. Second, in the case of one non-BPS and two BPS operators at strong coupling we sum up all wrapping corrections associated with the opposite bridge to the non-trivial operator, or the "bottom" mirror channel. We also give an alternative interpretation of the results in terms of a gas of fermions and show that they can be expressed compactly as an operator-valued super-determinant.

Journal ArticleDOI
18 Oct 2016-Collabra
TL;DR: The authors examined N400 modulation associated with independent manipulations of predictability and congruity in an adjective-noun paradigm that allows us to precisely control predictability through corpus counts.
Abstract: Previous work has shown that the N400 ERP component is elicited by all words, whether presented in isolation or in structured contexts, and that its amplitude is modulated by semantic association and contextual predictability. What is less clear is the extent to which the N400 response is modulated by semantic incongruity when predictability is held constant. In the current study we examine N400 modulation associated with independent manipulations of predictability and congruity in an adjective-noun paradigm that allows us to precisely control predictability through corpus counts. Our results demonstrate small N400 effects of semantic congruity (yellow bag vs. innocent bag), and much more robust N400 effects of predictability (runny nose vs. dainty nose) under the same conditions. These data argue against unitary N400 theories according to which N400 effects of both predictability and incongruity reflect a common process such as degree of integration difficulty, as large N400 effects of predictability were observed in the absence of large N400 effects of incongruity. However, the data are consistent with some versions of unitary ‘facilitated access’ N400 theories, as well as multiple-generator accounts according to which the N400 can be independently modulated by facilitated conceptual/lexical access (as with predictability) and integration difficulty (as with incongruity, perhaps to a greater extent in full sentential contexts).

Journal ArticleDOI
TL;DR: In this article, the authors identify photon surfaces and anti-photon surfaces in some physically interesting spacetimes which are not spherically symmetric; these spacetimes solve physically reasonable field equations, including in some cases the vacuum Einstein equations, albeit they are not asymptotically flat.

Journal ArticleDOI
TL;DR: It is shown that this weighted difference and its combination for two different elements can be used to extract a value for the fine-structure constant from near-future bound-electron g factor experiments with an accuracy competitive with or better than the present literature value.
Abstract: A weighted difference of the g factors of the H- and Li-like ions of the same element is theoretically studied and optimized in order to maximize the cancellation of nuclear effects between the two charge states. We show that this weighted difference and its combination for two different elements can be used to extract a value for the fine-structure constant from near-future bound-electron g factor experiments with an accuracy competitive with or better than the present literature value.