Author

Roger Penrose

Bio: Roger Penrose is an academic researcher from University of Oxford. The author has contributed to research in topics: General relativity & Quantum gravity. The author has an h-index of 78, co-authored 201 publications receiving 39379 citations. Previous affiliations of Roger Penrose include University College London & King's College London.


Papers
Journal Article
TL;DR: In this paper, Hameroff and Penrose reply to Grush and Churchland (1995), addressing the possibility of errors in human or robot mathematical reasoning and showing, for example, how Grush and Churchland have seriously misunderstood what they refer to as "physiological evidence" regarding the effects of the drug colchicine.
Abstract: Grush and Churchland (1995) attempt to address aspects of the proposal that we have been making concerning a possible physical mechanism underlying the phenomenon of consciousness. Unfortunately, they employ arguments that are highly misleading and, in some important respects, factually incorrect. Their article ‘Gaps in Penrose’s Toilings’ is addressed specifically at the writings of one of us (Penrose), but since the particular model they attack is one put forward by both of us (Hameroff and Penrose, 1995; 1996), it is appropriate that we both reply; but since our individual remarks refer to different aspects of their criticism we are commenting on their article separately. The logical arguments discussed by Grush and Churchland, and the related physics, are answered in Part 1 by Penrose, largely by pointing out precisely where these arguments have already been treated in detail in Shadows of the Mind (Penrose, 1994). In Part 2, Hameroff replies to various points on the biological side, showing for example how they have seriously misunderstood what they refer to as ‘physiological evidence’ regarding the effects of the drug colchicine. The reply serves also to discuss aspects of our model ‘orchestrated objective reduction in brain microtubules – Orch OR’ which attempts to deal with the serious problems of consciousness more directly and completely than any previous theory. Part 1: The Relevance of Logic and Physics Logical arguments It has been argued in the books by one of us, The Emperor’s New Mind (Penrose, 1989 – henceforth Emperor) and Shadows of the Mind (Penrose, 1994 – henceforth Shadows) that Gödel’s theorem shows that there must be something non-computational involved in mathematical thinking. The Grush and Churchland (1995 – henceforth G&C) discussion attempts to dismiss this argument from Gödel’s theorem on certain grounds. However, the main points that they put forward are ones which have been amply addressed in Shadows.
It is very hard to understand how G&C can make the claims that they do without giving any indication that virtually all their points are explicitly taken into account in Shadows. It might be the case that the arguments given in Shadows are in some respects inadequate, and it would have been interesting if G&C had provided a detailed commentary on these particular arguments, pointing out possible shortcomings where they occur. But it would seem from what G&C say that they have not even read, and certainly not understood, these arguments. A natural reaction to their commentary would be simply to say “go and read the book and come back when you have understood its arguments.” However, it will be helpful to pinpoint the specific issues that they raise here, and to point out the places in Shadows where these issues are addressed. The main argument that they appear to be raising against Penrose’s (1989; 1994) use of Gödel’s theorem (to demonstrate non-computability in mathematical thinking) is that mathematical thinking contains errors. They give the impression that the possibility of errors by mathematicians is not even considered by Penrose. However, in §§3.2, 3.4, 3.17, 3.19, 3.20 and 3.21 of Shadows the question of possible errors in human or robot mathematical reasoning is explicitly addressed at length. (The words ‘errors’ and ‘erroneous’ even appear explicitly in the headings of two of those sections, and it is hard to see why G&C make no reference to these parts of the book.) In addition, on page 16 of their commentary, G&C claim that ‘most of the technical machinery’ involved in Penrose’s arguments refers to what they call ‘A1a’ and ‘A1c’, on their page 15, which they choose not to dispute; whereas in fact by far the most difficult technical arguments given in Shadows are those which specifically address the possibility of errors in human or robot mathematical reasoning (these are given in §§3.19 and 3.20 of Shadows).
It is difficult to understand why G&C fail to refer to this discussion, seeming to suggest (quite incorrectly) that Penrose has an in-built faith in the complete accuracy of the reasoning of mathematicians! G&C have a curious way of formalizing what they believe to be the ingredients of Penrose’s arguments. In particular, on page 16 they refer to ‘Penrose’s Premise A1: Human thought, at least in some instances, perhaps in all, is sound, yet non-algorithmic’ (which they break down into A1a, . . . , A1e). Their ‘Premise A1’ is nowhere to be found in Penrose’s writings. It is fully admitted by Penrose that actual human thinking can be unsound even when seeming to be carried out in the most rigorous fashion by mathematicians. It may well be that there is a genuine and deep misunderstanding implicit in what G&C are attempting to say, and it may be helpful to try to clarify the issue here. For the purposes of our present discussion (and for the essential discussion given in Shadows) it will be sufficient to restrict attention to a very specific class of mathematical statements, namely those referred to as Π₁-sentences. Such sentences are assertions that particular (Turing-machine) computations do not halt. There are some very famous examples of mathematical assertions which take the form of Π₁-sentences, the best known being the so-called ‘Fermat’s Last Theorem’. Other examples are ‘Goldbach’s conjecture’ (still unproved) that every even number greater than 2 is the sum of two primes, Lagrange’s theorem that every natural number is the sum of four squares, and the famous 4-colour theorem. It is useful to concentrate one’s attention on Π₁-sentences because this is all one needs for application of the Gödel argument to the issue of computability in human mathematical thinking. There is no relevant issue of dispute between mathematicians as to the meaningfulness and objectivity of the truth of such sentences.
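The Π₁ form of Goldbach's conjecture can be made concrete: the conjecture holds if and only if a particular counterexample-searching computation never halts. The sketch below (our illustration, not from the text) caps the unbounded search at a finite bound so that it can actually be run.

```python
def is_sum_of_two_primes(n):
    """Check whether the even number n is a sum of two primes."""
    def is_prime(m):
        if m < 2:
            return False
        d = 2
        while d * d <= m:
            if m % d == 0:
                return False
            d += 1
        return True
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

def goldbach_counterexample_search(bound):
    """The Pi_1 form of Goldbach: run without a bound, this loop halts
    iff the conjecture is FALSE.  Here the search is capped at `bound`."""
    n = 4
    while n <= bound:
        if not is_sum_of_two_primes(n):
            return n          # counterexample found: conjecture refuted
        n += 2
    return None               # no counterexample below the bound

print(goldbach_counterexample_search(10_000))  # → None
```

The returned `None` just confirms the well-known fact that the conjecture has no counterexample among small even numbers; the mathematical content of the Π₁-sentence is the claim that the unbounded version of this loop never halts.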
(One might, however, worry about the ‘intuitionists’ or other constructivists in this context – and some reference to such viewpoints is given on p. 18 and footnote 30 on p. 20 of the G&C article. However, such constructivist viewpoints do not evade the Gödel argument and the use made of it in Shadows, as is explicitly addressed in the discussion of Q9 on page 87 of Shadows, a discussion not even referred to by G&C.) As far as we can make out, G&C are not disputing the absolute (‘Platonic’) nature of the truth or falsity of explicit Π₁-sentences. The issue is the accessibility of the truth of Π₁-sentences by human reasoning and insight. We should make clear what is meant by a word such as ‘accessibility’ in this context, since there seem to be a great many misconceptions by philosophers and others as to how mathematical understanding actually operates. It is not a question of some kind of ‘mystical intuition’ that (some) mathematicians might have, and which is unavailable to ordinary mortals. What is being referred to by ‘access’ is simply the normal procedure of mathematical proof. It is not even a question of how some mathematician might have the inspiration to arrive at a proof. It is merely the question of the understanding which is involved in the ability to follow a proof in principle. (See, in particular, the response to Q12, pp. 101–3 of Shadows.) However, it should be made clear in this context that the word ‘proof’ does not necessarily refer to a formalized argument within some pre-assigned logical scheme. For example, the arguments given by Andrew Wiles (as completed by Taylor and Wiles) to demonstrate the validity of Fermat’s Last Theorem were certainly not presented as formal arguments within, say, the Zermelo–Fraenkel axiom system. The essential point about such arguments is that they have to be correct as mathematical reasoning.
It is a secondary matter to try to find out within which formal mathematical systems such arguments can be formulated. Indeed, what the Gödel argument shows (and this is not in dispute) is that if the rules of some formal system, F, can be trusted as providing correct demonstrations of mathematical statements — and here we need restrict attention only to Π₁-sentences — then the particular Π₁-sentence G(F) must also be accepted as true even though it is not a consequence of the very rules provided by F. (Here the sentence G(F) is the Gödel proposition which asserts the consistency of the formal system F — assuming that F is sufficiently extensive. It can also be taken as the explicit statement Ck(k) exhibited on p. 75 of Shadows.) What this shows is that mathematical understanding (i.e. mathematical proof, in the sense above) cannot be encapsulated in any humanly acceptable formal system. Here ‘acceptable’ means acceptable to mathematicians as a reliable means of obtaining mathematical truths, where attention may be restricted to the truth of Π₁-sentences. The notion of ‘proof’ that is being referred to above certainly raises profound issues. However, it would be unreasonable to dismiss it as something which is too ill-defined for scientific consideration, or perhaps ‘mystical’. There is indeed something mysterious about the very nature of ‘understanding’, and this is what is involved here. But the notion of proof that is involved in mathematical understanding is extraordinarily precise and accurate. There is no other form of argument within science or philosophy which really bears comparison with it. Moreover, this notion transcends any individual mathematician. But it is what mathematicians individually strive for.
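The statement Ck(k) mentioned above is built by a diagonal construction. The standard halting-problem form of that construction (an illustration of the general idea, not Penrose's exact notation) can be sketched directly: given any candidate halting-decider H, one can build a program D that does the opposite of whatever H predicts about D itself.

```python
def make_diagonal(H):
    """Given a purported halting decider H(f) -> bool ('f() halts'),
    build a program D that does the opposite of H's prediction for D."""
    def D():
        if H(D):          # H says D halts...
            while True:   # ...so D loops forever
                pass
        return            # H says D loops, so D halts immediately
    return D

# Any concrete guess H makes is refuted by its own diagonal program:
def H_optimist(f):        # a (wrong) decider claiming everything halts
    return True

D = make_diagonal(H_optimist)
print(H_optimist(D))      # → True, yet running D() would loop forever
```

No single computable H can be right about its own diagonal program, which is the engine behind the Gödel/Turing argument the text appeals to; the code merely exhibits the construction for one concrete (and necessarily wrong) H.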
If one mathematician claims to have an argument for demonstrating the validity of some assertion — say a Π₁-sentence — then it should in principle be possible to convince another mathematician that the argument, and hence the conclusion, is correct unless there is an error, in which case it is up to the mathematicians to locate this error. There is no question but that mathematicians do, not infrequently, make errors. This is not the point. The point is that it is possible for there actu…

90 citations

Journal ArticleDOI
TL;DR: In this article, it was shown that the 10 gravitationally conserved quantities defined in asymptotically flat, empty space-times are, when suitably modified, also conserved in asymptotically flat Einstein-Maxwell space-times.
Abstract: It is shown that the 10 gravitationally‐conserved quantities defined in asymptotically flat, empty, space‐times are, when suitably modified, also conserved in asymptotically flat Einstein‐Maxwell space‐times. Furthermore, the implied selection rules for transitions between stationary Einstein‐Maxwell states are the same as those in the pure gravitational case.

90 citations

Journal ArticleDOI

87 citations

Journal ArticleDOI
TL;DR: In this article, the authors used the first few months of observations of the recently launched satellite LARES to verify the geodesic motion of a small, structureless test-particle.
Abstract: The discovery of the accelerating expansion of the Universe, thought to be driven by a mysterious form of “dark energy” constituting most of the Universe, has further revived the interest in testing Einstein’s theory of General Relativity. At the very foundation of Einstein’s theory is the geodesic motion of a small, structureless test-particle. Depending on the physical context, a star, planet or satellite can behave very nearly like a test-particle, so geodesic motion is used to calculate the advance of the perihelion of a planet’s orbit, the dynamics of a binary pulsar system and of an Earth-orbiting satellite. Verifying geodesic motion is then a test of paramount importance to General Relativity and other theories of fundamental physics. On the basis of the first few months of observations of the recently launched satellite LARES, its orbit shows the best agreement of any satellite with the test-particle motion predicted by General Relativity. That is, after modelling its known non-gravitational perturbations, the LARES orbit shows the smallest deviations from geodesic motion of any artificial satellite: its residual mean acceleration away from geodesic motion is less than 0.5 × 10⁻¹² m/s². LARES-type satellites can thus be used for accurate measurements and for tests of gravitational and fundamental physics. Already with only a few months of observation, LARES provides smaller scatter in the determination of several low-degree geopotential coefficients (Earth gravitational deviations from sphericity) than available from observations of any other satellite or combination of satellites.
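To put the quoted residual of 0.5 × 10⁻¹² m/s² in perspective, it can be compared with the Newtonian acceleration at LARES's orbit. The orbital radius used below (about 7820 km, i.e. roughly 1450 km altitude) is our assumption for illustration; it is not stated in the abstract.

```python
# Fractional size of the LARES residual acceleration quoted in the abstract,
# relative to the Newtonian acceleration at its orbit.
GM_EARTH = 3.986004e14      # m^3/s^2, Earth's standard gravitational parameter
R_LARES  = 7.820e6          # m, assumed semi-major axis (not from the abstract)
residual = 0.5e-12          # m/s^2, from the abstract

g_orbit  = GM_EARTH / R_LARES**2
fraction = residual / g_orbit
print(f"g at orbit ≈ {g_orbit:.2f} m/s^2; residual ≈ {fraction:.1e} of it")
```

Under that assumed radius, the residual is below one part in 10¹³ of the dominant gravitational acceleration, which is what makes the "best agreement of any satellite" claim quantitative.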

87 citations

Journal ArticleDOI
TL;DR: In this paper, the authors examined concentric sets of low-variance circular rings in the WMAP data, finding a highly non-isotropic distribution, which is consistent with CCC's expectations.
Abstract: A new analysis of the CMB, using WMAP data, supports earlier indications of non-Gaussian features of concentric circles of low temperature variance. Conformal cyclic cosmology (CCC) predicts such features from supermassive black-hole encounters in an aeon preceding our Big Bang. The significance of individual low-variance circles in the true data has been disputed; yet a recent independent analysis has confirmed CCC's expectation that CMB circles have a non-Gaussian temperature distribution. Here we examine concentric sets of low-variance circular rings in the WMAP data, finding a highly non-isotropic distribution. A new “sky-twist” procedure, directly analysing WMAP data, without appeal to simulations, shows that the prevalence of these concentric sets depends on the rings being circular, rather than even slightly elliptical, numbers dropping off dramatically with increasing ellipticity. This is consistent with CCC's expectations; so also is the crucial fact that whereas some of the rings' radii are found to reach around 15°, none exceed 20°. The non-isotropic distribution of the concentric sets may be linked to previously known anomalous and non-Gaussian CMB features.

87 citations


Cited by
Journal ArticleDOI
TL;DR: In this article, it is shown that quantum mechanical effects cause black holes to create and emit particles as if they were hot bodies with temperature, which leads to a slow decrease in the mass of the black hole and to its eventual disappearance.
Abstract: In the classical theory black holes can only absorb and not emit particles. However it is shown that quantum mechanical effects cause black holes to create and emit particles as if they were hot bodies with temperature ħκ/2πk ≈ 10⁻⁶ (M_⊙/M) K, where κ is the surface gravity of the black hole. This thermal emission leads to a slow decrease in the mass of the black hole and to its eventual disappearance: any primordial black hole of mass less than about 10¹⁵ g would have evaporated by now. Although these quantum effects violate the classical law that the area of the event horizon of a black hole cannot decrease, there remains a Generalized Second Law: S + ¼A never decreases, where S is the entropy of matter outside black holes and A is the sum of the surface areas of the event horizons. This shows that gravitational collapse converts the baryons and leptons in the collapsing body into entropy. It is tempting to speculate that this might be the reason why the Universe contains so much entropy per baryon.
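The temperature formula in the abstract can be evaluated numerically. With κ = c⁴/4GM for a Schwarzschild black hole, the formula becomes T = ħc³/8πGMk_B; a minimal sketch, using CODATA-style constants, gives about 6 × 10⁻⁸ K for one solar mass (the abstract's 10⁻⁶ K is an order-of-magnitude estimate).

```python
import math

# Hawking temperature T = hbar*c^3 / (8*pi*G*M*kB), i.e. hbar*kappa/(2*pi*kB*c)
# with Schwarzschild surface gravity kappa = c^4/(4*G*M).
hbar  = 1.054571817e-34   # J s
c     = 2.99792458e8      # m/s
G     = 6.67430e-11       # m^3 kg^-1 s^-2
kB    = 1.380649e-23      # J/K
M_SUN = 1.989e30          # kg

def hawking_temperature(M):
    return hbar * c**3 / (8 * math.pi * G * M * kB)

print(f"T(M_sun) ≈ {hawking_temperature(M_SUN):.2e} K")  # ~6.2e-08 K
```

Since T scales as 1/M, the small primordial black holes mentioned in the abstract (≲10¹⁵ g) are correspondingly hot, which is why only they would have evaporated by now.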

10,923 citations

Journal ArticleDOI
TL;DR: The authors review the very fast recent progress of quantum cryptography in both theory and experiment, from quantum teleportation and the quantum one-time pad to optical amplification, privacy amplification and quantum secret growing, with emphasis on open questions and technological issues.
Abstract: Quantum cryptography could well be the first application of quantum mechanics at the individual quanta level. The very fast progress in both theory and experiments over the recent years are reviewed, with emphasis on open questions and technological issues.

6,949 citations

Journal ArticleDOI
TL;DR: In this paper, the concept of black-hole entropy was introduced as a measure of information about a black hole interior which is inaccessible to an exterior observer, and it was shown that the entropy is equal to the ratio of the black hole area to the square of the Planck length times a dimensionless constant of order unity.
Abstract: There are a number of similarities between black-hole physics and thermodynamics. Most striking is the similarity in the behaviors of black-hole area and of entropy: Both quantities tend to increase irreversibly. In this paper we make this similarity the basis of a thermodynamic approach to black-hole physics. After a brief review of the elements of the theory of information, we discuss black-hole physics from the point of view of information theory. We show that it is natural to introduce the concept of black-hole entropy as the measure of information about a black-hole interior which is inaccessible to an exterior observer. Considerations of simplicity and consistency, and dimensional arguments indicate that the black-hole entropy is equal to the ratio of the black-hole area to the square of the Planck length times a dimensionless constant of order unity. A different approach making use of the specific properties of Kerr black holes and of concepts from information theory leads to the same conclusion, and suggests a definite value for the constant. The physical content of the concept of black-hole entropy derives from the following generalized version of the second law: When common entropy goes down a black hole, the common entropy in the black-hole exterior plus the black-hole entropy never decreases. The validity of this version of the second law is supported by an argument from information theory as well as by several examples.
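The abstract's claim can be made numerical: with the dimensionless constant taken as 1/4 (the value later fixed by Hawking; Bekenstein's paper argues only that it is of order unity), the entropy of a solar-mass Schwarzschild black hole comes out near 10⁷⁷ k_B. A minimal sketch:

```python
import math

# Black-hole entropy S/kB = A / (4 * lP^2): horizon area over four times the
# Planck length squared.  The factor 1/4 is the later-standard constant;
# Bekenstein's argument fixes it only up to order unity.
hbar = 1.054571817e-34   # J s
c    = 2.99792458e8      # m/s
G    = 6.67430e-11       # m^3 kg^-1 s^-2

def entropy_over_kB(M):
    r_s = 2 * G * M / c**2            # Schwarzschild radius
    A   = 4 * math.pi * r_s**2        # horizon area
    lP2 = hbar * G / c**3             # Planck length squared
    return A / (4 * lP2)

print(f"S/kB ≈ {entropy_over_kB(1.989e30):.1e}")  # ~1.0e77 for one solar mass
```

The enormous size of this number, compared with the ~10⁵⁷ baryons in a solar mass, is the quantitative content of Hawking's remark above that gravitational collapse converts baryons into entropy.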

6,591 citations

Proceedings ArticleDOI
Lov K. Grover1
01 Jul 1996
TL;DR: In this paper, it was noted that quantum mechanical computers had been shown to be more powerful than classical computers on various specialized problems, culminating in Shor's 1994 demonstration that a quantum computer could efficiently factor a given integer N, in time a finite power of O(log N).
Abstract: Quantum mechanical computers were proposed in the early 1980’s [Benioff80] and shown to be at least as powerful as classical computers, an important but not surprising result, since classical computers, at the deepest level, ultimately follow the laws of quantum mechanics. The description of quantum mechanical computers was formalized in the late 80’s and early 90’s [Deutsch85] [BB92] [BV93] [Yao93] and they were shown to be more powerful than classical computers on various specialized problems. In early 1994, [Shor94] demonstrated that a quantum mechanical computer could efficiently solve a well-known problem for which there was no known efficient algorithm using classical computers. This is the problem of integer factorization, i.e. finding the factors of a given integer N, in a time which is a finite power of O(log N).
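The paper this entry belongs to (Grover, 1996) introduced the quantum search algorithm, which finds a marked item among N in roughly (π/4)√N steps; the excerpt above covers only the historical setup. A toy state-vector simulation of the Grover iteration (oracle sign flip followed by inversion about the mean), written as our own illustration:

```python
import math

# Toy amplitude simulation of Grover's search iteration on N = 8 items.
N, marked = 8, 5
amp = [1 / math.sqrt(N)] * N                       # uniform superposition

for _ in range(int(math.pi / 4 * math.sqrt(N))):   # ~(pi/4)*sqrt(N) = 2 iterations
    amp[marked] = -amp[marked]                     # oracle: flip marked sign
    mean = sum(amp) / N
    amp = [2 * mean - a for a in amp]              # inversion about the mean

p = amp[marked] ** 2
print(f"P(marked) = {p:.3f}")                      # → P(marked) = 0.945
```

Two iterations concentrate almost all probability on the marked item, illustrating the quadratic speedup over the ~N/2 queries a classical unstructured search needs on average.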

6,335 citations

Journal ArticleDOI
TL;DR: Recognition-by-components (RBC) provides a principled account of the heretofore undecided relation between the classic principles of perceptual organization and pattern recognition.
Abstract: The perceptual recognition of objects is conceptualized to be a process in which the image of the input is segmented at regions of deep concavity into an arrangement of simple geometric components, such as blocks, cylinders, wedges, and cones. The fundamental assumption of the proposed theory, recognition-by-components (RBC), is that a modest set of generalized-cone components, called geons (N ≤ 36), can be derived from contrasts of five readily detectable properties of edges in a two-dimensional image: curvature, collinearity, symmetry, parallelism, and cotermination. The detection of these properties is generally invariant over viewing position and image quality and consequently allows robust object perception when the image is projected from a novel viewpoint or is degraded. RBC thus provides a principled account of the heretofore undecided relation between the classic principles of perceptual organization and pattern recognition: The constraints toward regularization (Prägnanz) characterize not the complete object but the object's components. Representational power derives from an allowance of free combinations of the geons. A Principle of Componential Recovery can account for the major phenomena of object recognition: If an arrangement of two or three geons can be recovered from the input, objects can be quickly recognized even when they are occluded, novel, rotated in depth, or extensively degraded. The results from experiments on the perception of briefly presented pictures by human observers provide empirical support for the theory. Any single object can project an infinity of image configurations to the retina. The orientation of the object to the viewer can vary continuously, each giving rise to a different two-dimensional projection. The object can be occluded by other objects or texture fields, as when viewed behind foliage. The object need not be presented as a full-colored textured image but instead can be a simplified line drawing.
Moreover, the object can even be missing some of its parts or be a novel exemplar of its particular category. But it is only with rare exceptions that an image fails to be rapidly and readily classified, either as an instance of a familiar object category or as an instance that cannot be so classified (itself a form of classification).

5,464 citations