Author

Raymond J. Nelson

Bio: Raymond J. Nelson is an academic researcher. The author has contributed to research on the topics of truth functions and Boolean algebra, has an h-index of 6, and has co-authored 19 publications receiving 187 citations.

Papers
Journal ArticleDOI
TL;DR: This paper develops a method for both disjunctive and conjunctive normal truth functions which is in some respects similar to Quine's but which does not involve prior expansion of a formula into developed normal form.
Abstract: In [1] Quine has presented a method for finding the simplest disjunctive normal forms of truth functions. Like the tabular methods of [2] and [3], Quine's method requires expansion of a formula into developed normal form as a preliminary step. This aspect of his method to a certain extent defeats one of the purposes of a mechanical method, which is to secure simplest forms in complicated cases (perhaps by using a digital computer) [4]. In the present paper we develop a method for both disjunctive and conjunctive normal truth functions which is in some respects similar to Quine's but which does not involve prior expansion of a formula into developed normal form. Familiarity with [1] is presupposed. We use the notations and conventions of [1] with the following exceptions and additions. ‘Φ’ names any formula, ‘Ψ’ any conjunction of literals, and ‘χ’ any disjunction of literals. Any disjunction of conjunctions of literals is a disjunctive normal formula and is designated by ‘ψ’; any conjunction of disjunctions of literals is a conjunctive normal formula and is designated by ‘X’. Note that we do not make use of Quine's notion of fundamental formulas. A formula Ψ occurring in a disjunctive normal formula ψ, provided it is a disjunct of ψ, is a clause; similarly for χ. We use ‘≡’ for logical equivalence of formulas and ‘=’ for identity of formulas to within the order of literals in clauses and the order of clauses in normal formulas.

96 citations
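
The abstract above works directly with normal formulas rather than with the developed (fully expanded) normal form. As a rough illustration of that idea, the Python sketch below multiplies out a conjunctive normal formula by distribution, discarding contradictory and absorbed terms as it goes, so that a disjunctive normal formula is obtained without ever building the full truth table. This is only a sketch of the general technique, not the paper's procedure; the function names and the signed-integer encoding of literals are illustrative choices.

```python
# A literal is a signed int: +v for the variable v, -v for its negation.
# A term (conjunction of literals) and a clause (disjunction of literals)
# are both represented as frozensets of literals.

def prune_absorbed(terms):
    """Drop any term that is a proper superset of (is absorbed by) another term."""
    return {t for t in terms if not any(other < t for other in terms)}

def multiply_out(cnf):
    """Distribute a list of clauses (a CNF) into a set of terms (a DNF),
    discarding contradictory and absorbed terms along the way."""
    dnf = {frozenset()}                       # the empty conjunction, i.e. "true"
    for clause in cnf:
        expanded = set()
        for term in dnf:
            for lit in clause:
                if -lit not in term:          # skip terms containing both x and not-x
                    expanded.add(term | {lit})
        dnf = prune_absorbed(expanded)
    return dnf

# Example: (x1 or x2) and (not x1 or x3)
print(multiply_out([frozenset({1, 2}), frozenset({-1, 3})]))
# -> {frozenset({1, 3}), frozenset({-1, 2}), frozenset({2, 3})}, i.e. x1 x3, x2 (not x1), x2 x3
```

Selecting a cheapest subset of the resulting terms that still covers the function is a further covering step, which the sketch does not attempt.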

Journal ArticleDOI
TL;DR: This note shows how the simplification procedure of [1] extends to finding weak simplest disjunctive and conjunctive normal equivalents of a formula Φ under the hypothesis that certain fundamental formulas in its letters are always false, a problem that generalizes the strong simplification considered in [1] and [2].
Abstract: In certain applications of truth-functional logic it is of interest to determine classes of formulas equivalent to a given formula Φ under the hypothesis that certain conjunctions of letters of Φ are always false. Of especial interest is the case where the class to be determined is that of simplest normal truth functions. The problem of giving a calculation procedure for this question is evidently a more general form of the simplification problem as considered in [1] and [2]. The purpose of this note is to indicate how the procedure of [1] applies. Let Π, Π′, Π″, … be fundamental formulas, in the sense of [2], such that every literal of Π, etc., is a literal of Φ and such that Π, Π′, Π″, … are all false for all values of the constituent literals. Then we say that Ψ is a weak simplest disjunctive normal equivalent of Φ if and only if Ψ ≡ Φ, under the hypothesis, and there is no Ψ′ such that Ψ′ is simpler than Ψ and Ψ′ ≡ Φ, under the hypothesis. Similarly, under exactly the same hypothesis, we say that X is a weak simplest conjunctive normal equivalent of Φ. When the class of always false fundamental formulas is empty, as in [1] and [2], we may speak of strong simplest forms and hence of the more general problem of strong simplification.

26 citations
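
The defining condition in the abstract above (equivalence of Ψ and Φ under the hypothesis that certain conjunctions are always false) can be checked by brute force over truth assignments, treating the assignments excluded by the hypothesis as don't-cares. The Python sketch below does exactly that check; it does not implement the paper's calculation procedure, and the names and the example formula are illustrative only.

```python
from itertools import product

def weak_equivalent(phi, psi, impossible, variables):
    """Return True if phi and psi agree on every truth assignment that is not
    ruled out by the hypothesis, i.e. on every assignment falsifying all of the
    'impossible' (always-false) conjunctions."""
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if any(cond(env) for cond in impossible):
            continue                           # excluded by the hypothesis: don't care
        if phi(env) != psi(env):
            return False
    return True

# Phi = xy or x(not y)z, with the hypothesis that (not y)z is always false.
phi = lambda e: (e["x"] and e["y"]) or (e["x"] and not e["y"] and e["z"])
psi = lambda e: e["x"] and e["y"]              # candidate simpler formula
impossible = [lambda e: not e["y"] and e["z"]]

print(weak_equivalent(phi, psi, impossible, ["x", "y", "z"]))   # True: weak equivalence holds
print(weak_equivalent(phi, psi, [], ["x", "y", "z"]))           # False: strong equivalence fails
```

With an empty class of always-false conjunctions the check reduces to ordinary (strong) equivalence, matching the remark at the end of the abstract.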


Cited by
Journal ArticleDOI
TL;DR: It is proved that almost every problem decidable in exponential space has essentially maximum circuit-size and space-bounded Kolmogorov complexity almost everywhere, and it is shown that infinite pseudorandom sequences have high nonuniform complexity almost everywhere.

291 citations

Journal ArticleDOI
Dag Prawitz
11 Feb 2008 - Theoria

159 citations

Proceedings ArticleDOI
01 Apr 2019
TL;DR: This paper reverse engineers the structure of the directory in a sliced, non-inclusive cache hierarchy, and proves that the directory can be used to bootstrap conflict-based cache attacks on the last-level cache.
Abstract: Although clouds have strong virtual memory isolation guarantees, cache attacks stemming from shared caches have proved to be a large security problem. However, despite the past effectiveness of cache attacks, their viability has recently been called into question on modern systems, due to trends in cache hierarchy design moving away from inclusive cache hierarchies. In this paper, we reverse engineer the structure of the directory in a sliced, non-inclusive cache hierarchy, and prove that the directory can be used to bootstrap conflict-based cache attacks on the last-level cache. We design the first cross-core Prime+Probe attack on non-inclusive caches. This attack works with minimal assumptions: the adversary does not need to share any virtual memory with the victim, nor run on the same processor core. We also show the first high-bandwidth Evict+Reload attack on the same hardware. We demonstrate both attacks by extracting key bits during RSA operations in GnuPG on a state-of-the-art non-inclusive Intel Skylake-X server.

143 citations

Book ChapterDOI
Pierre Marquis
01 Jan 2000
TL;DR: In this section, the notion of consequence finding is introduced and motivated in informal terms; then the scope of the chapter and its organization are successively pointed out.
Abstract: In this section, the notion of consequence finding is introduced and motivated in informal terms. Then, the scope of the chapter and its organization are successively pointed out.

127 citations

Journal ArticleDOI
TL;DR: The proposed area model is based on transforming the given multi-output Boolean function description into an equivalent single-output function; the model is empirical, and results demonstrating its feasibility and utility are presented.
Abstract: High-level power estimation, when given only a high-level design specification such as a functional or register-transfer level (RTL) description, requires high-level estimation of the circuit average activity and total capacitance. Considering that total capacitance is related to circuit area, this paper addresses the problem of computing the "area complexity" of multi-output combinational logic given only its functional description, i.e., Boolean equations, where area complexity refers to the number of gates required for an optimal multilevel implementation of the combinational logic. The proposed area model is based on transforming the multi-output Boolean function description into an equivalent single-output function. The area model is empirical, and results demonstrating its feasibility and utility are presented. Also, a methodology for converting the gate count estimates, obtained from the area model, into capacitance estimates is presented. High-level power estimates based on the total capacitance estimates and average activity estimates are also presented.

119 citations
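
The abstract above bases its area model on replacing a multi-output Boolean description with an equivalent single-output function before estimating gate count. The abstract does not spell out the transformation, so the Python sketch below shows one standard way such a construction can be set up: multiplexing the m outputs behind ceil(log2 m) auxiliary select inputs. The names and the encoding are illustrative assumptions, not the paper's specific construction.

```python
import math

def to_single_output(outputs):
    """Combine m single-output Boolean functions f_i(x) into one function
    F(x, s) that returns f_s(x), where the select index s is supplied through
    ceil(log2(m)) auxiliary inputs (MSB first)."""
    n_sel = max(1, math.ceil(math.log2(len(outputs))))

    def F(x_bits, s_bits):
        s = sum(int(bit) << i for i, bit in enumerate(reversed(s_bits)))
        if s >= len(outputs):                 # unused select codes: pin the value to 0
            return False
        return bool(outputs[s](x_bits))

    return F, n_sel

# Example: a half adder viewed as a two-output function of (a, b).
sum_bit   = lambda x: x[0] ^ x[1]
carry_bit = lambda x: x[0] and x[1]

F, n_sel = to_single_output([sum_bit, carry_bit])   # n_sel == 1
assert F((1, 0), (0,)) is True                      # select 0 -> sum bit
assert F((1, 0), (1,)) is False                     # select 1 -> carry bit
```

Gate-count estimates for the single function F can then serve as a proxy for the area of the original multi-output logic, which is the general idea behind the transformation described in the abstract.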