
Showing papers on "Conjunctive normal form published in 2021"


Book ChapterDOI
05 Jul 2021
TL;DR: ProCount as mentioned in this paper uses a graded project-join tree to compute exact literal-weighted projected model counts of propositional formulas in conjunctive normal form and achieves state-of-the-art performance.
Abstract: Recent work in weighted model counting proposed a unifying framework for dynamic-programming algorithms. The core of this framework is a project-join tree: an execution plan that specifies how Boolean variables are eliminated. We adapt this framework to compute exact literal-weighted projected model counts of propositional formulas in conjunctive normal form. Our key conceptual contribution is to define gradedness on project-join trees, a novel condition requiring irrelevant variables to be eliminated before relevant variables. We prove that building graded project-join trees can be reduced to building standard project-join trees and that graded project-join trees can be used to compute projected model counts. The resulting tool ProCount is competitive with the state-of-the-art tools \(\texttt {D4}_{\texttt {P}}\), projMC, and reSSAT, achieving the shortest solving time on 131 benchmarks of 390 benchmarks solved by at least one tool, from 849 benchmarks in total.

12 citations
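For intuition, a literal-weighted projected model count sums, over each assignment to the relevant variables that can be extended to a satisfying assignment, the product of the weights of its literals. The brute-force Python sketch below makes the definition concrete; it is not ProCount's dynamic-programming algorithm, and the clause and weight encodings (DIMACS-style integer literals, a literal-to-weight dict) are illustrative choices.

```python
from itertools import product

def satisfiable_with(cnf, fixed, free_vars):
    # Brute force: can some assignment to free_vars extend `fixed`
    # to satisfy every clause? Clauses are lists of non-zero ints.
    for bits in product([False, True], repeat=len(free_vars)):
        a = dict(fixed)
        a.update(zip(free_vars, bits))
        if all(any(a[abs(l)] == (l > 0) for l in cl) for cl in cnf):
            return True
    return False

def projected_count(cnf, relevant, irrelevant, weight):
    # weight maps each literal (v or -v) to its weight.
    total = 0.0
    for bits in product([False, True], repeat=len(relevant)):
        fixed = dict(zip(relevant, bits))
        if satisfiable_with(cnf, fixed, irrelevant):
            w = 1.0
            for v, b in fixed.items():
                w *= weight[v if b else -v]
            total += w
    return total
```

This enumerates all 2^n assignments and so only serves to pin down what the graded project-join trees compute efficiently.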


Journal ArticleDOI
TL;DR: An inference approach to designing an intelligent agent that executes tasks in a given environment by formulating reasoning processes as particular evolutions of Petri nets; the approach is efficient, since its time complexity is proven to be polynomial in the number of Boolean variables.

6 citations


Proceedings ArticleDOI
20 Aug 2021
TL;DR: AlloyMax, as proposed in this paper, extends Alloy with the capability to express and analyze problems with optimal solutions, introducing a small set of language constructs that can specify a wide range of problems involving optimality.
Abstract: Alloy is a declarative modeling language based on a first-order relational logic. Its constraint-based analysis has enabled a wide range of applications in software engineering, including configuration synthesis, bug finding, test-case generation, and security analysis. Certain types of analysis tasks in these domains involve finding an optimal solution. For example, in a network configuration problem, instead of finding any valid configuration, it may be desirable to find one that is most permissive (i.e., it permits a maximum number of packets). Due to its dependence on SAT, however, Alloy cannot be used to specify and analyze these types of problems. We propose AlloyMax, an extension of Alloy with a capability to express and analyze problems with optimal solutions. AlloyMax introduces (1) a small addition of language constructs that can be used to specify a wide range of problems that involve optimality and (2) a new analysis engine that leverages a Maximum Satisfiability (MaxSAT) solver to generate optimal solutions. To enable this new type of analysis, we show how a specification in a first-order relational logic can be translated into an input format of MaxSAT solvers—namely, a Boolean formula in weighted conjunctive normal form (WCNF). We demonstrate the applicability and scalability of AlloyMax on a benchmark of problems. To our knowledge, AlloyMax is the first approach to enable analysis with optimality in a relational modeling language, and we believe that AlloyMax has the potential to bring a wide range of new applications to Alloy.

4 citations
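The MaxSAT problem that the WCNF translation above targets takes a weighted CNF: hard clauses that must all hold, and soft clauses whose total satisfied weight is maximized. The minimal brute-force sketch below illustrates the objective only; real MaxSAT solvers and the WCNF file format differ in detail.

```python
from itertools import product

def max_sat(n_vars, hard, soft):
    """Brute-force MaxSAT: satisfy every hard clause, maximize the
    total weight of satisfied soft clauses. Clauses are DIMACS-style
    lists of non-zero ints; soft is a list of (weight, clause) pairs.
    Returns (best_score, best_assignment) or None if hard is unsat."""
    best = None
    for bits in product([False, True], repeat=n_vars):
        a = {i + 1: b for i, b in enumerate(bits)}
        sat = lambda cl: any(a[abs(l)] == (l > 0) for l in cl)
        if not all(sat(cl) for cl in hard):
            continue  # hard clauses are inviolable
        score = sum(w for w, cl in soft if sat(cl))
        if best is None or score > best[0]:
            best = (score, a)
    return best
```

For example, with hard clause (x1 ∨ x2) and soft clauses ¬x1 (weight 2) and ¬x2 (weight 3), the optimum sets x1 true and x2 false for a score of 3.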


Journal ArticleDOI
TL;DR: This work devises techniques for automatic synthesis and verification of TL circuits based on constraint solving, formulates a fundamental operation to collapse TL functions, and derives a necessary and sufficient condition of collapsibility for the linear combination of two TL functions.
Abstract: Threshold logic (TL) circuits gain increasing attention due to their feasible realization with emerging technologies and strong bind to neural network applications. In this work, we devise techniques for automatic synthesis and verification of TL circuits based on constraint solving. For synthesis, we formulate a fundamental operation to collapse TL functions, and derive a necessary and sufficient condition of collapsibility for linear combination of two TL functions. An approach based on solving the subset sum problem is proposed for fast circuit transformation. For verification, we propose a procedure to convert a TL function to a multiplexer (MUX) tree and to pseudo-Boolean (PB) constraints for formal Boolean and PB reasoning, respectively. Experiments on synthesis show that the collapse operation further reduces gate counts of synthesized TL circuits by an average of 18%. Experiments on verification demonstrate good scalability of the MUX-based method for equivalence checking of synthesized TL circuits, and efficiency of PB constraint conversion in cases where the conjunctive normal form (CNF) formula conversion and MUX tree conversion suffer from memory explosion.

4 citations


Proceedings ArticleDOI
01 Feb 2021
TL;DR: In this article, a neural network-based cognitive SAT to SAT-hard clause translator is proposed under the constraints of minimal power, performance and area (PPA) overheads while preserving the original functionality with impenetrable security.
Abstract: Logic obfuscation is introduced as a pivotal defense mechanism against emerging hardware threats on Integrated Circuits (ICs) such as reverse engineering (RE) and intellectual property (IP) theft. The effectiveness of logic obfuscation is challenged by the recently introduced Boolean satisfiability (SAT) attack and its variants. A plethora of countermeasures has been proposed to thwart SAT attacks. Irrespective of the implemented defenses, large power, performance and area (PPA) overheads are seen to be indispensable. In contrast, we propose a neural network-based cognitive SAT to SAT-hard clause translator under the constraints of minimal PPA overheads while preserving the original functionality with impenetrable security. Our proposed method incorporates a SAT-hard clause generator that translates the existing conjunctive normal form (CNF) through minimal perturbations, such as inserting pairs of inverters or buffers, or adding a new lightweight SAT-hard block, depending on the provided CNF. For efficient SAT-hard clause generation, the proposed method is equipped with a multi-layer neural network that first learns the dependencies of features (literals and clauses), followed by a long short-term memory (LSTM) network to validate and backpropagate the SAT-hardness for better learning and translation. For a fair comparison with the state-of-the-art, we evaluate our proposed technique on ISCAS'85 benchmarks. It successfully defends against multiple state-of-the-art SAT attacks devised for hardware RE. In addition, we also evaluate our proposed technique's empirical performance against the MiniSAT, Lingeling and Glucose SAT solvers that form the base for numerous existing deobfuscation SAT attacks.

4 citations


Journal ArticleDOI
TL;DR: In this paper, a probabilistic linear time algorithm for reasoning over conjunctive normal form (CNF) CLA formulae was presented, together with a dual probabilistic linear time algorithm for learning CLA statements by collecting experienced snapshots in disjunctive normal form (DNF).
Abstract: Context logic (CL), a logical language similar in style to description logics but with a more cognitive motivation as a logical language of cognition, has been developed since 2007 to provide a new approach to the symbol grounding problem, a key problem for reliable intelligent environments and other intelligent sensory systems. CL is a three-layered integrated hierarchy of languages: a relational base layer with the expressiveness of propositional logic (CLA), a quantifier-free decidable language (CL0), and an expressive language with full quantification (CL1). As was shown in 2018, the core CLA reasoning can be implemented on a variant of Kanerva’s Vector Symbolic Architecture, the activation bit vector machine (ABVM), shedding new light on the fundamental cognitive faculties of symbol grounding and imagery. However, the system left two issues open: first, the core reasoning algorithm was a classical EXPTIME reasoner; second, fundamental aspects of a learning algorithm were sketched but not presented as a full algorithm. This paper addresses these two issues. We present a probabilistic linear time algorithm for reasoning over conjunctive normal form (CNF) CLA formulae, together with a dual probabilistic linear time algorithm for learning CLA statements by collecting experienced snapshots in disjunctive normal form (DNF).

3 citations


Proceedings ArticleDOI
30 May 2021
TL;DR: In this paper, a new formalization of combinatorial filter reduction is proposed, which requires only a polynomial number of constraints and characterizes these constraints in three different forms: nonlinear, linear, and conjunctive normal form.
Abstract: Reduction of combinatorial filters involves compressing state representations that robots use. Such optimization arises in automating the construction of minimalist robots. But exact combinatorial filter reduction is an NP-complete problem and all current techniques are either inexact or formalized with exponentially many constraints. This paper proposes a new formalization needing only a polynomial number of constraints, and characterizes these constraints in three different forms: nonlinear, linear, and conjunctive normal form. Empirical results show that constraints in conjunctive normal form capture the problem most effectively, leading to a method that outperforms the others. Further examination indicates that a substantial proportion of constraints remain inactive during iterative filter reduction. To leverage this observation, we introduce just-in-time generation of such constraints, which yields improvements in efficiency and has the potential to minimize large filters.

3 citations


Journal ArticleDOI
04 Mar 2021-Entropy
TL;DR: In this paper, the structural properties of propositional formulas in conjunctive normal form (CNF) are studied through the structural entropy of formulas, and the experimental results showed that an entropy-based initial candidate solution strategy effectively improves the performance of the solvers CCAsat and Sparrow2011 when incorporated into these two solvers.
Abstract: The satisfiability (SAT) problem is a core problem in computer science. Existing studies have shown that most industrial SAT instances can be effectively solved by modern SAT solvers while random SAT instances cannot. It is believed that the structural characteristics of different SAT formula classes are the reasons behind this difference. In this paper, we study the structural properties of propositional formulas in conjunctive normal form (CNF) through the principle of structural entropy of formulas. First, we used structural entropy to measure the complex structure of a formula and found that the difficulty of solving a formula is related to its structural entropy: the smaller the compressing information of a formula, the more difficult it is to solve. Secondly, we proposed a λ-approximation strategy to approximate the structural entropy of large formulas. The experimental results showed that the proposed strategy can effectively approximate the structural entropy of the original formula and that the approximation ratio is more than 92%. Finally, we analyzed the structural properties of a formula in the solution process and found that a local search solver tends to select variables in different communities to perform the next round of searches and that the structural entropy of a variable affects the probability of the variable being flipped. Using these conclusions, we also proposed an initial candidate solution generation strategy for local search for SAT, and the experimental results showed that this strategy effectively improves the performance of the solvers CCAsat and Sparrow2011 when incorporated into these two solvers.

3 citations


Posted Content
TL;DR: In this article, the Regular many-valued Horn Non-Clausal class (RH) was proposed by amalgamating the regular Horn class and the regular non-clausal (NC) class, yielding a class that is many-valued, non-clausal and tractable.
Abstract: The relevance of polynomial formula classes to deductive efficiency motivated their search, and currently, a great number of such classes are known. Nonetheless, they have been sought exclusively in the setting of clausal form and propositional logic, which is of course expressively limiting for real applications. As a consequence, a first polynomial propositional class in non-clausal (NC) form has recently been proposed. Along these lines, and towards making NC tractability applicable beyond propositional logic, we firstly define the Regular many-valued Horn Non-Clausal class, or RH, obtained by suitably amalgamating both regular classes: Horn and NC. Secondly, we demonstrate that the relationship between (1) RH and the regular Horn class is that RH syntactically subsumes the Horn class but that both classes are semantically equivalent; and between (2) RH and the regular non-clausal class is that RH contains all NC formulas whose clausal form is Horn. Thirdly, we define Regular Non-Clausal Unit-Resolution, or RUR-NC, and prove both that it is complete for RH and that it checks the satisfiability of RH formulas in polynomial time. The latter fact shows that our intended goal is reached, since RH is many-valued, non-clausal and tractable. As RH and RUR-NC are both basic in the DPLL scheme, the most efficient in propositional logic, and can be extended to some other non-classical logics, we argue that they pave the way for efficient non-clausal DPLL-based approximate reasoning.

2 citations


Journal ArticleDOI
TL;DR: This paper designs non-trivial, parameterized and exact exponential algorithms for the problem of finding read-once resolution refutations of unsatisfiable 2CNF formulas within the resolution refutation system.
Abstract: In this paper, we discuss algorithms for the problem of finding read-once resolution refutations of unsatisfiable 2CNF formulas within the resolution refutation system. Broadly, a read-once resolution refutation is one in which each constraint (input or derived) is used at most once. Read-once resolution refutations have been widely studied in the literature for a number of constraint system-refutation system pairs. For instance, read-once resolution has been analyzed for boolean formulas in conjunctive normal form (CNF) and read-once cutting planes have been analyzed for polyhedral systems. By definition, read-once refutations are compact, and hence valuable in applications that place a premium on visualization. The satisfiability problem (SAT) is concerned with finding a satisfying assignment for a boolean formula in CNF. While SAT is NP-complete in general, there exist some interesting subclasses of CNF formulas, for which it is decidable in polynomial time. One such subclass is the class of 2CNF formulas, i.e., CNF formulas in which each clause has at most two literals. The existence of efficient algorithms for satisfiability checking in 2CNF formulas (2SAT), makes this class useful from the perspective of modeling selected logic programs. The work in this paper is concerned with the read-once refutability problem (under resolution) in this subclass. Although 2SAT is decidable in polynomial time, the problem of finding a read-once resolution refutation of an unsatisfiable 2CNF formula is NP-complete. We design non-trivial, parameterized and exact exponential algorithms for this problem. Additionally, we study the computational complexity of finding a shortest read-once resolution refutation of a 2CNF formula.

1 citations
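The polynomial-time decidability of 2SAT mentioned in the abstract is classically obtained via the implication graph: each clause (a ∨ b) yields implications ¬a → b and ¬b → a, and the formula is unsatisfiable exactly when some variable lies in the same strongly connected component as its negation. A compact sketch of that classical decision procedure follows; it illustrates 2SAT itself, not the paper's read-once refutation algorithms.

```python
from collections import defaultdict

def two_sat(n_vars, clauses):
    """Decide 2SAT via the implication graph and Kosaraju's SCC
    algorithm. Literals are non-zero ints; clauses are (a, b) pairs."""
    graph, rev = defaultdict(list), defaultdict(list)
    nodes = set()
    for a, b in clauses:
        for u, v in ((-a, b), (-b, a)):  # clause (a or b) => -a -> b, -b -> a
            graph[u].append(v)
            rev[v].append(u)
        nodes.update((a, -a, b, -b))
    # Pass 1: DFS on the graph, recording finish order.
    order, seen = [], set()
    def dfs1(u):
        seen.add(u)
        for v in graph[u]:
            if v not in seen:
                dfs1(v)
        order.append(u)
    for u in nodes:
        if u not in seen:
            dfs1(u)
    # Pass 2: DFS on the reverse graph in decreasing finish order
    # labels each strongly connected component.
    comp = {}
    def dfs2(u, root):
        comp[u] = root
        for v in rev[u]:
            if v not in comp:
                dfs2(v, root)
    for u in reversed(order):
        if u not in comp:
            dfs2(u, u)
    # Satisfiable iff no variable shares an SCC with its negation.
    return all(comp.get(v, v) != comp.get(-v, -v)
               for v in range(1, n_vars + 1))
```

For instance, (x1)∧(¬x1), encoded as the unit-like clauses (1,1) and (-1,-1), puts 1 and -1 in one component and is rejected.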


DOI
21 Oct 2021
TL;DR: In this article, the authors aim to quantify the extent to which machine learning models have learned and generalized from the given data by translating a trained model into a C program and feeding it to the CBMC model checker to produce a formula in Conjunctive Normal Form.
Abstract: The efficacy of machine learning models is typically determined by computing their accuracy on test data sets. However, this may often be misleading, since the test data may not be representative of the problem that is being studied. With QuantifyML we aim to precisely quantify the extent to which machine learning models have learned and generalized from the given data. Given a trained model, QuantifyML translates it into a C program and feeds it to the CBMC model checker to produce a formula in Conjunctive Normal Form (CNF). The formula is analyzed with off-the-shelf model counters to obtain precise counts with respect to different model behavior. QuantifyML enables i) evaluating learnability by comparing the counts for the outputs to ground truth, expressed as logical predicates, ii) comparing the performance of models built with different machine learning algorithms (decision-trees vs. neural networks), and iii) quantifying the safety and robustness of models.

Journal ArticleDOI
TL;DR: In this paper, a belief revision process between knowledge bases in conjunctive normal form is built on first solving the underlying propositional inference problem. Based on counting the falsifying assignments represented by ternary strings, an algorithmic proposal is made that allows determining, in a practical way, when the revised knowledge base is inconsistent.
Abstract: The belief revision process involves several problems considered hard. One of the crucial problems is how to represent the knowledge base K under consideration, as well as how to represent and add new information, which may even contradict the knowledge base. In this work, both the knowledge base and the new information are in conjunctive normal form. Each clause of a conjunctive normal form is encoded by a string over 0, 1, and *, representing the falsifying assignments of the clause. Using the falsifying assignments of the clauses allows different logical operators to be performed efficiently among conjunctive forms. Our belief revision process between conjunctive forms is based on first solving the propositional inference problem. Based on counting the falsifying assignments represented by ternary strings, an algorithmic proposal is made that allows determining, in a practical way, when the revised knowledge base is inconsistent. Finally, the time-complexity analysis of our algorithmic proposal is carried out.
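The ternary-string encoding described above rests on a simple fact: a clause is falsified exactly when every one of its literals is false, so the positions of its variables are pinned to the falsifying value and every other position is free (*). A small sketch, with an illustrative integer-literal encoding:

```python
def falsify_string(clause, n_vars):
    """Encode the falsifying assignments of a clause as a ternary
    string over {0, 1, *}: position i is '0' if variable i+1 must be
    false, '1' if it must be true, '*' if it is free. A clause is
    falsified exactly when every literal in it is false."""
    s = ['*'] * n_vars
    for lit in clause:
        s[abs(lit) - 1] = '0' if lit > 0 else '1'
    return ''.join(s)

def count_falsifying(clause, n_vars):
    # Each '*' doubles the number of matching assignments.
    return 2 ** falsify_string(clause, n_vars).count('*')
```

For example, over three variables the clause (x1 ∨ ¬x2) is falsified by the assignments matching "01*", i.e. exactly two of the eight.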

Posted Content
TL;DR: This paper proposed a two-step approach to partition instances of the Conjunctive Normal Form (CNF) Syntactic Formula Isomorphism problem (CSFI) into groups of different complexity.
Abstract: In this paper, we propose a two-step approach to partition instances of the Conjunctive Normal Form (CNF) Syntactic Formula Isomorphism problem (CSFI) into groups of different complexity. First, we build a model, based on the Transformer architecture, that attempts to solve instances of the CSFI problem. Then, we leverage the errors of this model and train a second Transformer-based model to partition the problem instances into groups of different complexity, thus detecting the ones that can be solved without using overly expensive resources. We evaluate the proposed approach on a pseudo-randomly generated dataset and obtain promising results. Finally, we discuss the possibility of extending this approach to other problems based on the same type of textual representation.


Journal ArticleDOI
TL;DR: In this paper, the copy complexity of Horn formulas with respect to read-once resolution was studied, and a polynomial time algorithm was proposed for the problem of checking whether a 2-CNF formula has a unit read-once refutation (UROR).

Patent
24 Jun 2021
TL;DR: In this article, a method for execution by a query processing module includes determining a query expression indicating a query for execution, and a normalized query expression is generated by performing either the CNF conversion or the DNF conversion upon the query expression based on the conversion selection data.
Abstract: A method for execution by a query processing module includes determining a query expression indicating a query for execution. An operator tree is generated based on a nested ordering of a plurality of operators indicated by the query expression. Conjunctive normal form (CNF) conversion cost data is generated based on the operator tree, and disjunctive normal form (DNF) conversion cost data is also generated based on the operator tree. Conversion selection data is generated based on the CNF conversion cost data and the DNF conversion cost data. The conversion selection data indicates a selection to perform either a CNF conversion or a DNF conversion. A normalized query expression is generated by performing either the CNF conversion or the DNF conversion upon the query expression based on the conversion selection data. Execution of the query is facilitated in accordance with the normalized query expression.
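The conversion-cost asymmetry this method weighs is visible even in a naive normal-form conversion: distributing AND over OR multiplies term counts, so an expression that is compact as CNF can explode as DNF, and vice versa. A toy sketch with an illustrative nested-tuple expression representation (this is not the patent's cost model, just the underlying blow-up):

```python
def to_dnf(expr):
    """Naive DNF conversion by distributing AND over OR.
    expr is ('lit', name) | ('and', e1, e2) | ('or', e1, e2).
    Returns a list of terms, each a frozenset of literal names."""
    op = expr[0]
    if op == 'lit':
        return [frozenset([expr[1]])]
    left, right = to_dnf(expr[1]), to_dnf(expr[2])
    if op == 'or':
        return left + right  # union of the two term lists
    # 'and': distribute, i.e. cross product of the two term lists
    return [a | b for a in left for b in right]
```

For instance, (a∨b)∧(c∨d)∧(e∨f) is three clauses as CNF but distributes into 2×2×2 = 8 DNF terms, which is why estimating both conversion costs before normalizing a query expression pays off.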

Posted Content
TL;DR: In this article, the authors showed that Majority-$k$SAT is solvable in polynomial time for every positive integer $k$, whereas the closely related GtMajority-$k$SAT problem is in P for $k \le 3$ but becomes NP-complete for $k \ge 4$.
Abstract: Majority-SAT is the problem of determining whether an input $n$-variable formula in conjunctive normal form (CNF) has at least $2^{n-1}$ satisfying assignments. Majority-SAT and related problems have been studied extensively in various AI communities interested in the complexity of probabilistic planning and inference. Although Majority-SAT has been known to be PP-complete for over 40 years, the complexity of a natural variant has remained open: Majority-$k$SAT, where the input CNF formula is restricted to have clause width at most $k$. We prove that for every $k$, Majority-$k$SAT is in P. In fact, for any positive integer $k$ and rational $\rho \in (0,1)$ with bounded denominator, we give an algorithm that can determine whether a given $k$-CNF has at least $\rho \cdot 2^n$ satisfying assignments, in deterministic linear time (whereas the previous best-known algorithm ran in exponential time). Our algorithms have interesting positive implications for counting complexity and the complexity of inference, significantly reducing the known complexities of related problems such as E-MAJ-$k$SAT and MAJ-MAJ-$k$SAT. At the heart of our approach is an efficient method for solving threshold counting problems by extracting sunflowers found in the corresponding set system of a $k$-CNF. We also show that the tractability of Majority-$k$SAT is somewhat fragile. For the closely related GtMajority-SAT problem (where we ask whether a given formula has greater than $2^{n-1}$ satisfying assignments) which is known to be PP-complete, we show that GtMajority-$k$SAT is in P for $k\le 3$, but becomes NP-complete for $k\geq 4$. These results are counterintuitive, because the "natural" classifications of these problems would have been PP-completeness, and because there is a stark difference in the complexity of GtMajority-$k$SAT and Majority-$k$SAT for all $k\ge 4$.
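As a reference point for the definition above, Majority-SAT asks whether at least half of all $2^n$ assignments satisfy the formula. The naive exponential-time check below simply enumerates assignments; the paper's contribution is precisely avoiding this enumeration for bounded-width CNF.

```python
from itertools import product

def majority_sat(n_vars, cnf):
    """Check the Majority-SAT definition directly: does the CNF have
    at least 2^(n-1) satisfying assignments? Exponential-time
    reference implementation over DIMACS-style integer literals."""
    count = 0
    for bits in product([False, True], repeat=n_vars):
        a = {i + 1: b for i, b in enumerate(bits)}
        if all(any(a[abs(l)] == (l > 0) for l in cl) for cl in cnf):
            count += 1
    return count >= 2 ** (n_vars - 1)
```

For example, the single clause (x1 ∨ x2) over two variables has 3 of 4 satisfying assignments and passes, while (x1)∧(x2) has only 1 and fails.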

Journal ArticleDOI
TL;DR: In this paper, it was shown that string avoidability over the binary alphabet can be interpreted as a version of the conjunctive normal form satisfiability problem in which each clause has infinitely many shifted variants.
Abstract: The partial string avoidability problem is stated as follows: given a finite set of strings with possible “holes” (wildcard symbols), determine whether there exists a two-sided infinite string containing no substrings from this set, assuming that a hole matches every symbol. The problem is known to be NP-hard and in PSPACE, and this article establishes its PSPACE-completeness. Next, string avoidability over the binary alphabet is interpreted as a version of conjunctive normal form satisfiability problem, where each clause has infinitely many shifted variants. Non-satisfiability of these formulas can be proved using variants of classical propositional proof systems, augmented with derivation rules for shifting proof lines (such as clauses, inequalities, polynomials, etc.). First, it is proved that there is a particular formula that has a short refutation in Resolution with a shift rule but requires classical proofs of exponential size. At the same time, it is shown that exponential lower bounds for classical proof systems can be translated for their shifted versions. Finally, it is shown that superpolynomial lower bounds on the size of shifted proofs would separate NP from PSPACE; a connection to lower bounds on circuit complexity is also established.


Journal ArticleDOI
TL;DR: In this article, FourierSAT, an incomplete SAT solver based on Fourier analysis (also known as the Walsh-Fourier transform) of Boolean functions, is proposed.

Posted Content
TL;DR: In this article, a novel algorithm was introduced to synthesize quantum logic in the Conjunctive Normal Form (CNF) model; it can synthesize a k-CNF with m clauses and n variables using $O(k^2 m^2/n)$ quantum gates and only three ancillary qubits.
Abstract: To demonstrate the advantage of quantum computation, many attempts have been made to attack classically intractable problems, such as the satisfiability problem (SAT), with quantum computers. To use quantum algorithms to solve these NP-hard problems, a quantum oracle with a quantum circuit implementation is usually required. In this manuscript, we first introduce a novel algorithm to synthesize quantum logic in the Conjunctive Normal Form (CNF) model. Compared with the linear number of ancillary qubits in the implementation of the Qiskit open-source framework, our algorithm can synthesize a k-CNF with m clauses and n variables using $O(k^2 m^2/n)$ quantum gates and only three ancillary qubits. Both the size and depth of the circuit can be further compressed as the number of ancillary qubits increases. When the number of ancillary qubits is $\Omega(m^\epsilon)$ (for any $\epsilon > 0$), the size of the quantum circuit given by the algorithm is O(km), which is asymptotically optimal. Furthermore, we design another algorithm to optimize the depth of the quantum circuit with only a small increase in its size. Experiments show that our algorithms achieve substantial improvements in size and depth compared with previous algorithms.

Proceedings ArticleDOI
13 May 2021
TL;DR: In this article, the authors proposed methods and algorithms for constructing testability functions for all lines, or a subset of lines, of a combinational circuit, which can significantly reduce the computational costs of constructing testability functions.
Abstract: The construction of testability functions of a combinational circuit line is considered, covering the controllability, observability and stuck-at fault detection functions, as well as the complement of the observability function. Methods and algorithms for constructing testability functions based on Binary Decision Diagrams (BDD) and Disjunctive Normal Form (DNF) are proposed, as well as methods for constructing Conjunctive Normal Form (CNF) and obtaining testability functions using a SAT solver. Methods and algorithms for constructing testability functions for all lines, or a subset of lines, of a circuit are also proposed. The proposed methods and algorithms make it possible to significantly reduce the computational costs of constructing testability functions of a combinational circuit.

Posted Content
TL;DR: The class of Possibilistic Horn Non-Clausal Knowledge Bases, as discussed in this paper, is the first characterized polynomial non-clausal knowledge base class within possibilistic reasoning.
Abstract: Possibilistic logic is the most widespread approach to handling uncertain and partially inconsistent information. Regarding normal forms, advances in possibilistic reasoning are mostly focused on clausal form. Yet, the encoding of real-world problems usually results in a non-clausal (NC) formula, and NC-to-clausal translators produce severe drawbacks that heavily limit the practical performance of clausal reasoning. Thus, by computing formulas in their original NC form, we propose several contributions showing that notable advances are also possible in possibilistic non-clausal reasoning. Firstly, we define the class of Possibilistic Horn Non-Clausal Knowledge Bases, or $\mathcal{\overline{H}}_\Sigma$, which subsumes the classes of possibilistic Horn and propositional Horn-NC bases. $\mathcal{\overline{H}}_\Sigma$ is shown to be a kind of NC analogue of the standard Horn class. Secondly, we define Possibilistic Non-Clausal Unit-Resolution, or $\mathcal{UR}_\Sigma$, and prove that $\mathcal{UR}_\Sigma$ correctly computes the inconsistency degree of $\mathcal{\overline{H}}_\Sigma$ members. $\mathcal{UR}_\Sigma$ had not been proposed before and is formulated in a clausal-like manner, which eases its understanding, formal proofs and future extension towards non-clausal resolution. Thirdly, we prove that computing the inconsistency degree of $\mathcal{\overline{H}}_\Sigma$ members takes polynomial time. Although tractable classes already exist in possibilistic logic, all of them are clausal, and thus $\mathcal{\overline{H}}_\Sigma$ turns out to be the first characterized polynomial non-clausal class within possibilistic reasoning.

Posted Content
TL;DR: In this article, an integer program is formulated to optimally trade classification accuracy for rule simplicity, and a column generation algorithm is used to efficiently search over an exponential number of candidate clauses without the need for heuristic rule mining.
Abstract: This paper considers the learning of Boolean rules in either disjunctive normal form (DNF, OR-of-ANDs, equivalent to decision rule sets) or conjunctive normal form (CNF, AND-of-ORs) as an interpretable model for classification. An integer program is formulated to optimally trade classification accuracy for rule simplicity. We also consider the fairness setting and extend the formulation to include explicit constraints on two different measures of classification parity: equality of opportunity and equalized odds. Column generation (CG) is used to efficiently search over an exponential number of candidate clauses (conjunctions or disjunctions) without the need for heuristic rule mining. This approach also bounds the gap between the selected rule set and the best possible rule set on the training data. To handle large datasets, we propose an approximate CG algorithm using randomization. Compared to three recently proposed alternatives, the CG algorithm dominates the accuracy-simplicity trade-off in 8 out of 16 datasets. When maximized for accuracy, CG is competitive with rule learners designed for this purpose, sometimes finding significantly simpler solutions that are no less accurate. Compared to other fair and interpretable classifiers, our method is able to find rule sets that meet stricter notions of fairness with a modest trade-off in accuracy.