Logic: Logic in Coq
We have now seen many examples of factual claims (propositions)
and ways of presenting evidence of their truth (proofs). In
particular, we have worked extensively with equality
propositions (e1 = e2), implications (P → Q), and quantified
propositions (∀ x, P). In this chapter, we will see how
Coq can be used to carry out other familiar forms of logical
reasoning.
Before diving into details, we should talk a bit about the status of
mathematical statements in Coq. Recall that Coq is a typed
language, which means that every sensible expression has an
associated type. Logical claims are no exception: any statement
we might try to prove in Coq has a type, namely Prop, the type
of propositions. We can see this with the Check command:
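For example (the particular propositions checked here are only illustrative; note that both true and false claims are accepted):
Check (3 = 3) : Prop.
Check (∀ n m : nat, n + m = m + n) : Prop.
Check (3 = 2) : Prop.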
Note that all syntactically well-formed propositions have type
Prop in Coq, regardless of whether they are true.
Simply being a proposition is one thing; being provable is
a different thing!
Indeed, propositions don't just have types -- they are
first-class entities that can be manipulated in all the same ways as
any of the other things in Coq's world.
So far, we've seen one primary place that propositions can appear:
in Theorem (and Lemma and Example) declarations.
But propositions can be used in other ways. For example, we
can give a name to a proposition using a Definition, just as we
give names to other kinds of expressions.
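For instance (the name plus_claim is assumed here for the sake of the example):
Definition plus_claim : Prop := 2 + 2 = 4.
Check plus_claim : Prop.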
We can later use this name in any situation where a proposition is
expected -- for example, as the claim in a Theorem declaration.
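Continuing the sketch above, the named proposition can serve as the claim of a theorem:
Theorem plus_claim_is_true :
plus_claim.
Proof. reflexivity. Qed.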
We can also write parameterized propositions -- that is,
functions that take arguments of some type and return a
proposition.
For instance, the following function takes a number
and returns a proposition asserting that this number is equal to
three:
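A sketch of such a definition (the name is_three is illustrative):
Definition is_three (n : nat) : Prop :=
n = 3.
Check is_three : nat → Prop.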
In Coq, functions that return propositions are said to define
properties of their arguments.
For instance, here's a (polymorphic) property defining the
familiar notion of an injective function.
Definition injective {A B} (f : A → B) : Prop :=
∀ x y : A, f x = f y → x = y.
Lemma succ_inj : injective S.
Proof.
intros n m H. injection H as H1. apply H1.
Qed.
The familiar equality operator = is a (binary) function that returns
a Prop.
The expression n = m is syntactic sugar for eq n m (defined in
Coq's standard library using the Notation mechanism).
Because eq can be used with elements of any type, it is also
polymorphic:
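Checking its fully explicit type confirms this:
Check @eq : ∀ A : Type, A → A → Prop.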
(Notice that we wrote @eq instead of eq: The type
argument A to eq is declared as implicit, and we need to turn
off the inference of this implicit argument to see the full type
of eq.)
Logical Connectives
Conjunction
To prove a conjunction, use the split tactic. This will generate
two subgoals, one for each part of the statement:
Example and_example : 3 + 4 = 7 ∧ 2 × 2 = 4.
Proof.
split.
- (* 3 + 4 = 7 *) reflexivity.
- (* 2 * 2 = 4 *) reflexivity.
Qed.
For any propositions A and B, if we assume that A is true and
that B is true, we can conclude that A ∧ B is also true. The Coq
library provides a function conj that does this.
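Its type reflects exactly that description:
Check @conj : ∀ A B : Prop, A → B → A ∧ B.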
Since applying a theorem with hypotheses to some goal has the effect of
generating as many subgoals as there are hypotheses for that theorem,
we can apply conj to achieve the same effect as split.
Example and_example' : 3 + 4 = 7 ∧ 2 × 2 = 4.
Proof.
apply conj.
- (* 3 + 4 = 7 *) reflexivity.
- (* 2 * 2 = 4 *) reflexivity.
Qed.
Lemma and_example2 :
∀ n m : nat, n = 0 ∧ m = 0 → n + m = 0.
Proof.
(* WORKED IN CLASS *)
intros n m H.
destruct H as [Hn Hm].
rewrite Hn. rewrite Hm.
reflexivity.
Qed.
As usual, we can also destruct H right when we introduce it,
instead of introducing and then destructing it:
Lemma and_example2' :
∀ n m : nat, n = 0 ∧ m = 0 → n + m = 0.
Proof.
intros n m [Hn Hm].
rewrite Hn. rewrite Hm.
reflexivity.
Qed.
You may wonder why we bothered packing the two hypotheses n = 0 and
m = 0 into a single conjunction, since we could also have stated the
theorem with two separate premises:
Lemma and_example2'' :
∀ n m : nat, n = 0 → m = 0 → n + m = 0.
Proof.
intros n m Hn Hm.
rewrite Hn. rewrite Hm.
reflexivity.
Qed.
For this specific theorem, both formulations are fine. But
it's important to understand how to work with conjunctive
hypotheses because conjunctions often arise from intermediate
steps in proofs, especially in larger developments. Here's a
simple example:
Lemma and_example3 :
∀ n m : nat, n + m = 0 → n × m = 0.
Proof.
(* WORKED IN CLASS *)
intros n m H.
apply plus_is_O in H.
destruct H as [Hn Hm].
rewrite Hn. reflexivity.
Qed.
Another common situation is that we know A ∧ B but in some context
we need just A or just B. In such cases we can do a
destruct (possibly as part of an intros) and use an underscore
pattern _ to indicate that the unneeded conjunct should just be
thrown away.
Lemma proj1 : ∀ P Q : Prop,
P ∧ Q → P.
Proof.
intros P Q HPQ.
destruct HPQ as [HP _].
apply HP. Qed.
Theorem and_commut : ∀ P Q : Prop,
P ∧ Q → Q ∧ P.
Proof.
intros P Q [HP HQ].
split.
- (* left *) apply HQ.
- (* right *) apply HP. Qed.
Exercise: 1 star, standard (and_assoc)
(In the following proof of associativity, notice how the nested intros pattern breaks the hypothesis H : P ∧ (Q ∧ R) down into HP : P, HQ : Q, and HR : R. Finish the proof.)
Theorem and_assoc : ∀ P Q R : Prop,
P ∧ (Q ∧ R) → (P ∧ Q) ∧ R.
Proof.
intros P Q R [HP [HQ HR]].
(* FILL IN HERE *) Admitted.
☐
Disjunction
Lemma factor_is_O:
∀ n m : nat, n = 0 ∨ m = 0 → n × m = 0.
Proof.
(* This pattern implicitly does case analysis on
n = 0 ∨ m = 0 *)
intros n m [Hn | Hm].
- (* Here, n = 0 *)
rewrite Hn. reflexivity.
- (* Here, m = 0 *)
rewrite Hm. rewrite <- mult_n_O.
reflexivity.
Qed.
We can see in this example that, when we perform case analysis on a
disjunction A ∨ B, we must separately satisfy two proof
obligations, each showing that the conclusion holds under a different
assumption -- A in the first subgoal and B in the second. The case analysis pattern [Hn | Hm] allows
us to name the hypotheses that are generated for the subgoals.
Conversely, to show that a disjunction holds, it suffices to show that
one of its sides holds. This can be done via the tactics left and
right. As their names imply, the first one requires proving the left
side of the disjunction, while the second requires proving the right
side. Here is a trivial use...
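For instance, a minimal lemma of this shape (the name or_intro_l is assumed for this sketch) shows that knowing A is enough to conclude A ∨ B:
Lemma or_intro_l : ∀ A B : Prop, A → A ∨ B.
Proof.
intros A B HA.
left.
apply HA.
Qed.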
... and here is a slightly more interesting example requiring both
left and right:
Lemma zero_or_succ :
∀ n : nat, n = 0 ∨ n = S (pred n).
Proof.
(* WORKED IN CLASS *)
intros [|n'].
- left. reflexivity.
- right. reflexivity.
Qed.
Falsehood and Negation
Up to this point, we have mostly been concerned with proving "positive" statements -- addition is commutative, appending lists is associative, etc. Of course, we are sometimes also interested in negative results, demonstrating that some given proposition is not true. Such statements are expressed with the logical negation operator ¬. Intuitively, since anything follows from a contradiction, we could define ¬ P as ∀ Q, P → Q; Coq makes the equivalent choice of defining ¬ P as P → False, where False is a contradictory proposition from the standard library.
Definition not (P:Prop) := P → False.
Check not : Prop → Prop.
Notation "~ x" := (not x) : type_scope.
Since False is a contradictory proposition, the principle of
explosion also applies to it. If we can get False into the context,
we can use destruct on it to complete any goal:
Theorem ex_falso_quodlibet : ∀ (P:Prop),
False → P.
Proof.
(* WORKED IN CLASS *)
intros P contra.
destruct contra. Qed.
The Latin ex falso quodlibet means, literally, "from falsehood
follows whatever you like"; this is another common name for the
principle of explosion.
Hint: while getting accustomed to Coq's definition of not, you might
find it helpful to unfold not near the beginning of proofs.
Exercise: 2 stars, standard, optional (not_implies_our_not)
Show that Coq's definition of negation implies the intuitive one mentioned above.
Theorem not_implies_our_not : ∀ (P:Prop),
¬ P → (∀ (Q:Prop), P → Q).
Proof.
(* FILL IN HERE *) Admitted.
☐
Notation "x <> y" := (~(x = y)).
Theorem zero_not_one : 0 ≠ 1.
Proof.
The proposition 0 ≠ 1 is exactly the same as
~(0 = 1) -- that is, not (0 = 1) -- which unfolds to
(0 = 1) → False. (We use unfold not explicitly here,
to illustrate that point, but generally it can be omitted.)
unfold not.
To prove an inequality, we may assume the opposite
equality...
intros contra.
... and deduce a contradiction from it. Here, the
equality O = S O contradicts the disjointness of
constructors O and S, so discriminate takes care
of it.
discriminate contra.
Qed.
It takes a little practice to get used to working with negation in Coq.
Even though you can see perfectly well why a statement involving
negation is true, it can be a little tricky at first to see how to make
Coq understand it!
Here are proofs of a few familiar facts to help get you warmed up.
Theorem not_False :
¬ False.
Proof.
unfold not. intros H. destruct H. Qed.
Theorem contradiction_implies_anything : ∀ P Q : Prop,
(P ∧ ¬P) → Q.
Proof.
(* WORKED IN CLASS *)
intros P Q [HP HNP]. unfold not in HNP.
apply HNP in HP. destruct HP. Qed.
Theorem double_neg : ∀ P : Prop,
P → ~~P.
Proof.
(* WORKED IN CLASS *)
intros P H. unfold not. intros G. apply G. apply H. Qed.
Exercise: 2 stars, advanced (double_neg_informal)
Write an informal proof of double_neg:
(* FILL IN HERE *)
(* Do not modify the following line: *)
Definition manual_grade_for_double_neg_informal : option (nat×string) := None.
☐
Exercise: 1 star, advanced (not_PNP_informal)
Write an informal proof (in English) of the proposition ∀ P : Prop, ~(P ∧ ¬P).
(* FILL IN HERE *)
(* Do not modify the following line: *)
Definition manual_grade_for_not_PNP_informal : option (nat×string) := None.
☐
Exercise: 2 stars, standard (de_morgan_not_or)
De Morgan's Laws, named for Augustus De Morgan, describe how negation interacts with conjunction and disjunction. The following law says that "the negation of a disjunction is the conjunction of the negations." There is a dual law de_morgan_not_and_not to which we will return at the end of this chapter.
Theorem de_morgan_not_or : ∀ (P Q : Prop),
¬ (P ∨ Q) → ¬P ∧ ¬Q.
Proof.
(* FILL IN HERE *) Admitted.
☐
Theorem not_true_is_false : ∀ b : bool,
b ≠ true → b = false.
Proof.
intros b H. destruct b eqn:HE.
- (* b = true *)
unfold not in H.
apply ex_falso_quodlibet.
apply H. reflexivity.
- (* b = false *)
reflexivity.
Qed.
Since reasoning with ex_falso_quodlibet is quite common, Coq
provides a built-in tactic, exfalso, for applying it.
Theorem not_true_is_false' : ∀ b : bool,
b ≠ true → b = false.
Proof.
intros [] H. (* note implicit destruct b here *)
- (* b = true *)
unfold not in H.
exfalso. (* <=== *)
apply H. reflexivity.
- (* b = false *) reflexivity.
Qed.
Truth
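Coq's standard library also defines True, a proposition that is trivially provable; its canonical proof is the constant I. A minimal example (the lemma name is illustrative):
Lemma True_is_true : True.
Proof. apply I. Qed.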
Unlike False, which is used extensively, True is used
relatively rarely, since it is trivial (and therefore
uninteresting) to prove as a goal, and conversely it provides no
interesting information when used as a hypothesis.
However, True can be quite useful when defining complex Props using
conditionals or as a parameter to higher-order Props. We'll come back
to this later.
For now, let's take a look at how we can use True and False to
achieve an effect similar to that of the discriminate tactic, without
literally using discriminate.
Pattern-matching lets us do different things for different
constructors. If the result of applying two different
constructors were hypothetically equal, then we could use match
to convert an unprovable statement (like False) to one that is
provable (like True).
Definition disc_fn (n: nat) : Prop :=
match n with
| O ⇒ True
| S _ ⇒ False
end.
Theorem disc_example : ∀ n, ¬ (O = S n).
Proof.
intros n contra.
assert (H : disc_fn O). { simpl. apply I. }
rewrite contra in H. simpl in H. apply H.
Qed.
To generalize this to other constructors, we simply have to provide an
appropriate variant of disc_fn. To generalize it to other
conclusions, we can use exfalso to replace them with False.
The built-in discriminate tactic takes care of all this for us!
The handy "if and only if" connective, which asserts that two
propositions have the same truth value, is simply the conjunction
of two implications.
Logical Equivalence
Definition iff (P Q : Prop) := (P → Q) ∧ (Q → P).
Notation "P <-> Q" := (iff P Q)
(at level 95, no associativity)
: type_scope.
Theorem iff_sym : ∀ P Q : Prop,
(P ↔ Q) → (Q ↔ P).
Proof.
(* WORKED IN CLASS *)
intros P Q [HAB HBA].
split.
- (* -> *) apply HBA.
- (* <- *) apply HAB. Qed.
Lemma not_true_iff_false : ∀ b,
b ≠ true ↔ b = false.
Proof.
(* WORKED IN CLASS *)
intros b. split.
- (* -> *) apply not_true_is_false.
- (* <- *)
intros H. rewrite H. intros H'. discriminate H'.
Qed.
The apply tactic can also be used with ↔. We can use
apply on an ↔ in either direction, without explicitly thinking
about the fact that it is really an and underneath.
Lemma apply_iff_example1:
∀ P Q R : Prop, (P ↔ Q) → (Q → R) → (P → R).
Proof.
intros P Q R Hiff H HP. apply H. apply Hiff. apply HP.
Qed.
Lemma apply_iff_example2:
∀ P Q R : Prop, (P ↔ Q) → (P → R) → (Q → R).
Proof.
intros P Q R Hiff H HQ. apply H. apply Hiff. apply HQ.
Qed.
Exercise: 1 star, standard, optional (iff_properties)
Using the above proof that ↔ is symmetric (iff_sym) as a guide, prove that it is also reflexive and transitive.
Theorem iff_refl : ∀ P : Prop,
P ↔ P.
Proof.
(* FILL IN HERE *) Admitted.
Theorem iff_trans : ∀ P Q R : Prop,
(P ↔ Q) → (Q ↔ R) → (P ↔ R).
Proof.
(* FILL IN HERE *) Admitted.
☐
Theorem or_distributes_over_and : ∀ P Q R : Prop,
P ∨ (Q ∧ R) ↔ (P ∨ Q) ∧ (P ∨ R).
Proof.
(* FILL IN HERE *) Admitted.
☐
Setoids and Logical Equivalence
A "setoid" is a set equipped with an equivalence relation -- that
is, a relation that is reflexive, symmetric, and transitive. When two
elements of a set are equivalent according to the relation, rewrite
can be used to replace one by the other.
We've seen this already with the equality relation = in Coq: when
x = y, we can use rewrite to replace x with y or vice-versa.
Similarly, the logical equivalence relation ↔ is reflexive,
symmetric, and transitive, so we can use it to replace one part of a
proposition with another: if P ↔ Q, then we can use rewrite to
replace P with Q, or vice-versa.
Here is a simple example demonstrating how these tactics work with
iff.
First, let's prove a couple of basic iff equivalences.
Lemma mul_eq_0 : ∀ n m, n × m = 0 ↔ n = 0 ∨ m = 0.
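One way to prove it (a sketch, not necessarily the original argument; the backward direction reuses factor_is_O from above) is by case analysis on n and m:
Proof.
intros n m. split.
- (* -> *)
intros H. destruct n as [| n'].
+ left. reflexivity.
+ destruct m as [| m'].
* right. reflexivity.
* simpl in H. discriminate H.
- (* <- *)
apply factor_is_O.
Qed.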
Theorem or_assoc :
∀ P Q R : Prop, P ∨ (Q ∨ R) ↔ (P ∨ Q) ∨ R.
Proof.
intros P Q R. split.
- intros [H | [H | H]].
+ left. left. apply H.
+ left. right. apply H.
+ right. apply H.
- intros [[H | H] | H].
+ left. apply H.
+ right. left. apply H.
+ right. right. apply H.
Qed.
We can now use these facts with rewrite and reflexivity to
give smooth proofs of statements involving equivalences. For example,
here is a ternary version of the previous mul_eq_0 result:
Lemma mul_eq_0_ternary :
∀ n m p, n × m × p = 0 ↔ n = 0 ∨ m = 0 ∨ p = 0.
Proof.
intros n m p.
rewrite mul_eq_0. rewrite mul_eq_0. rewrite or_assoc.
reflexivity.
Qed.
Existential Quantification
Definition Even x := ∃ n : nat, x = double n.
Lemma four_is_Even : Even 4.
Proof.
unfold Even. ∃ 2. reflexivity.
Qed.
Conversely, if we have an existential hypothesis ∃ x, P in
the context, we can destruct it to obtain a witness x and a
hypothesis stating that P holds of x.
Theorem exists_example_2 : ∀ n,
(∃ m, n = 4 + m) →
(∃ o, n = 2 + o).
Proof.
(* WORKED IN CLASS *)
intros n [m Hm]. (* note the implicit destruct here *)
∃ (2 + m).
apply Hm. Qed.
Exercise: 1 star, standard, especially useful (dist_not_exists)
Prove that "P holds for all x" implies "there is no x for which P does not hold." (Hint: destruct H as [x E] works on existential assumptions!)
Theorem dist_not_exists : ∀ (X:Type) (P : X → Prop),
(∀ x, P x) → ¬ (∃ x, ¬ P x).
Proof.
(* FILL IN HERE *) Admitted.
☐
Exercise: 2 stars, standard (dist_exists_or)
Prove that existential quantification distributes over disjunction.
Theorem dist_exists_or : ∀ (X:Type) (P Q : X → Prop),
(∃ x, P x ∨ Q x) ↔ (∃ x, P x) ∨ (∃ x, Q x).
Proof.
(* FILL IN HERE *) Admitted.
☐
Theorem leb_plus_exists : ∀ n m, n <=? m = true → ∃ x, m = n+x.
Proof.
(* FILL IN HERE *) Admitted.
Theorem plus_exists_leb : ∀ n m, (∃ x, m = n+x) → n <=? m = true.
Proof.
(* FILL IN HERE *) Admitted.
☐
Programming with Propositions
To illustrate, consider the claim that an element x occurs in a list l. This property has a simple recursive structure:
- If l is the empty list, then x cannot occur in it, so the property "x appears in l" is simply false.
- Otherwise, l has the form x' :: l'. In this case, x occurs in l if it is equal to x' or if it occurs in l'.
Fixpoint In {A : Type} (x : A) (l : list A) : Prop :=
match l with
| [] ⇒ False
| x' :: l' ⇒ x' = x ∨ In x l'
end.
When In is applied to a concrete list, it expands into a
concrete sequence of nested disjunctions.
Example In_example_1 : In 4 [1; 2; 3; 4; 5].
Proof.
(* WORKED IN CLASS *)
simpl. right. right. right. left. reflexivity.
Qed.
Example In_example_2 :
∀ n, In n [2; 4] →
∃ n', n = 2 × n'.
Proof.
(* WORKED IN CLASS *)
simpl.
intros n [H | [H | []]].
- ∃ 1. rewrite <- H. reflexivity.
- ∃ 2. rewrite <- H. reflexivity.
Qed.
(Notice the use of the empty pattern to discharge the last case
en passant.)
We can also reason about more generic statements involving In.
Theorem In_map :
∀ (A B : Type) (f : A → B) (l : list A) (x : A),
In x l →
In (f x) (map f l).
Proof.
intros A B f l x.
induction l as [|x' l' IHl'].
- (* l = nil, contradiction *)
simpl. intros [].
- (* l = x' :: l' *)
simpl. intros [H | H].
+ rewrite H. left. reflexivity.
+ right. apply IHl'. apply H.
Qed.
(Note here how In starts out applied to a variable and only
gets expanded when we do case analysis on this variable.)
This way of defining propositions recursively is very convenient in
some cases, less so in others. In particular, it is subject to Coq's
usual restrictions regarding the definition of recursive functions,
e.g., the requirement that they be "obviously terminating."
In the next chapter, we will see how to define propositions
inductively -- a different technique with its own strengths and
limitations.
Exercise: 3 stars, standard (In_map_iff)
Theorem In_map_iff :
∀ (A B : Type) (f : A → B) (l : list A) (y : B),
In y (map f l) ↔
∃ x, f x = y ∧ In x l.
Proof.
intros A B f l y. split.
{ induction l as [|x l' IHl'].
(* FILL IN HERE *) Admitted.
☐
Theorem In_app_iff : ∀ A l l' (a:A),
In a (l++l') ↔ In a l ∨ In a l'.
Proof.
intros A l. induction l as [|a' l' IH].
(* FILL IN HERE *) Admitted.
☐
Exercise: 3 stars, standard, especially useful (All)
We noted above that functions returning propositions can be seen as properties of their arguments. For instance, if P has type nat → Prop, then P n says that property P holds of n. Drawing inspiration from In, write a recursive function All stating that some property P holds of all elements of a list l. To make sure that your definition is correct, prove the All_In lemma below.
Fixpoint All {T : Type} (P : T → Prop) (l : list T) : Prop
(* REPLACE THIS LINE WITH ":= _your_definition_ ." *). Admitted.
Theorem All_In :
∀ T (P : T → Prop) (l : list T),
(∀ x, In x l → P x) ↔
All P l.
Proof.
(* FILL IN HERE *) Admitted.
☐
Exercise: 2 stars, standard, optional (combine_odd_even)
Complete the definition of combine_odd_even below. It takes as arguments two properties of numbers, Podd and Peven, and it should return a property P such that P n is equivalent to Podd n when n is odd and equivalent to Peven n otherwise.
Definition combine_odd_even (Podd Peven : nat → Prop) : nat → Prop
(* REPLACE THIS LINE WITH ":= _your_definition_ ." *). Admitted.
To test your definition, prove the following facts:
Theorem combine_odd_even_intro :
∀ (Podd Peven : nat → Prop) (n : nat),
(odd n = true → Podd n) →
(odd n = false → Peven n) →
combine_odd_even Podd Peven n.
Proof.
(* FILL IN HERE *) Admitted.
Theorem combine_odd_even_elim_odd :
∀ (Podd Peven : nat → Prop) (n : nat),
combine_odd_even Podd Peven n →
odd n = true →
Podd n.
Proof.
(* FILL IN HERE *) Admitted.
Theorem combine_odd_even_elim_even :
∀ (Podd Peven : nat → Prop) (n : nat),
combine_odd_even Podd Peven n →
odd n = false →
Peven n.
Proof.
(* FILL IN HERE *) Admitted.
☐
Applying Theorems to Arguments
Check plus : nat → nat → nat.
Check @rev : ∀ X, list X → list X.
Check add_comm : ∀ n m : nat, n + m = m + n.
Coq checks the statement of the add_comm theorem (or prints
it for us, if we leave off the part beginning with the colon) in
the same way that it checks the type of any term (e.g., plus)
that we ask it to Check.
Why?
The reason is that the identifier add_comm actually refers to a
proof object -- a logical derivation establishing the truth of the
statement ∀ n m : nat, n + m = m + n. The type of this object
is the proposition that it is a proof of.
Intuitively, this makes sense because the statement of a
theorem tells us what we can use that theorem for.
Operationally, this analogy goes even further: by applying a
theorem as if it were a function, i.e., applying it to values and
hypotheses with matching types, we can specialize its result
without having to resort to intermediate assertions. For example,
suppose we wanted to prove the following result:
Lemma add_comm3 :
∀ x y z, x + (y + z) = (z + y) + x.
It appears at first sight that we ought to be able to prove this by
rewriting with add_comm twice to make the two sides match. The
problem is that the second rewrite will undo the effect of the
first.
Proof.
intros x y z.
rewrite add_comm.
rewrite add_comm.
(* We are back where we started... *)
Abort.
We encountered similar issues back in Induction, and we saw
one way to work around them by using assert to derive a specialized
version of add_comm that can be used to rewrite exactly where we
want.
Lemma add_comm3_take2 :
∀ x y z, x + (y + z) = (z + y) + x.
Proof.
intros x y z.
rewrite add_comm.
assert (H : y + z = z + y).
{ rewrite add_comm. reflexivity. }
rewrite H.
reflexivity.
Qed.
A more elegant alternative is to apply add_comm directly
to the arguments we want to instantiate it with, in much the same
way as we apply a polymorphic function to a type argument.
Lemma add_comm3_take3 :
∀ x y z, x + (y + z) = (z + y) + x.
Proof.
intros x y z.
rewrite add_comm.
rewrite (add_comm y z).
reflexivity.
Qed.
Here's another example of using a theorem like a function.
The following theorem says: if a list l contains some element x,
then l must be nonempty.
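Concretely, the theorem being applied in the examples below can be stated as follows (the short proof given here is one possible argument, not necessarily the original):
Theorem in_not_nil :
∀ A (x : A) (l : list A), In x l → l ≠ [].
Proof.
intros A x l H. unfold not. intro Hl.
rewrite Hl in H.
simpl in H.
apply H.
Qed.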
What makes this interesting is that one quantified variable
(x) does not appear in the conclusion (l ≠ []).
Intuitively, we should be able to use this theorem to prove the special
case where x is 42. However, simply invoking the tactic apply
in_not_nil will fail because it cannot infer the value of x.
Lemma in_not_nil_42 :
∀ l : list nat, In 42 l → l ≠ [].
Proof.
intros l H.
Fail apply in_not_nil.
Abort.
There are several ways to work around this:
Use apply ... with ...
Lemma in_not_nil_42_take2 :
∀ l : list nat, In 42 l → l ≠ [].
Proof.
intros l H.
apply in_not_nil with (x := 42).
apply H.
Qed.
Use apply ... in ...
Lemma in_not_nil_42_take3 :
∀ l : list nat, In 42 l → l ≠ [].
Proof.
intros l H.
apply in_not_nil in H.
apply H.
Qed.
Explicitly apply the lemma to the value for x.
Lemma in_not_nil_42_take4 :
∀ l : list nat, In 42 l → l ≠ [].
Proof.
intros l H.
apply (in_not_nil nat 42).
apply H.
Qed.
Explicitly apply the lemma to a hypothesis (causing the values of the
other parameters to be inferred).
Lemma in_not_nil_42_take5 :
∀ l : list nat, In 42 l → l ≠ [].
Proof.
intros l H.
apply (in_not_nil _ _ _ H).
Qed.
You can "use a theorem as a function" in this way with almost any
tactic that can take a theorem's name as an argument.
Note, also, that theorem application uses the same inference
mechanisms as function application; thus, it is possible, for
example, to supply wildcards as arguments to be inferred, or to
declare some hypotheses to a theorem as implicit by default.
These features are illustrated in the proof below. (The details of
how this proof works are not critical -- the goal here is just to
illustrate applying theorems to arguments.)
Example lemma_application_ex :
∀ {n : nat} {ns : list nat},
In n (map (fun m ⇒ m × 0) ns) →
n = 0.
Proof.
intros n ns H.
destruct (proj1 _ _ (In_map_iff _ _ _ _ _) H)
as [m [Hm _]].
rewrite mul_0_r in Hm. rewrite <- Hm. reflexivity.
Qed.
We will see many more examples in later chapters.
Working with Decidable Properties
We have seen two ways of expressing logical claims in Coq: with booleans (of type bool) and with propositions (of type Prop). They differ in the following respects:
                               bool    Prop
                               ====    ====
decidable?                     yes     no
useable with match?            yes     no
works with rewrite tactic?     no      yes
For instance, we can say that a number n is even either by stating that the boolean test even n returns true, or by stating that there exists some k such that n = double k.
Of course, it would be pretty strange if these two
characterizations of evenness did not describe the same set of
natural numbers! Fortunately, we can prove that they do...
We first need two helper lemmas.
Lemma even_double : ∀ k, even (double k) = true.
Proof.
intros k. induction k as [|k' IHk'].
- reflexivity.
- simpl. apply IHk'.
Qed.
Lemma even_double_conv : ∀ n, ∃ k,
n = if even n then double k else S (double k).
Proof.
(* Hint: Use the even_S lemma from Induction.v. *)
(* FILL IN HERE *) Admitted.
☐
Now the main theorem:
Theorem even_bool_prop : ∀ n,
even n = true ↔ Even n.
Proof.
intros n. split.
- intros H. destruct (even_double_conv n) as [k Hk].
rewrite Hk. rewrite H. ∃ k. reflexivity.
- intros [k Hk]. rewrite Hk. apply even_double.
Qed.
In view of this theorem, we can say that the boolean computation
even n is reflected in the truth of the proposition
∃ k, n = double k.
Similarly, to state that two numbers n and m are equal, we can
say either
- (1) that n =? m returns true, or
- (2) that n = m.
Even when the boolean and propositional formulations of a claim are
interchangeable from a purely logical perspective, it can be more
convenient to use one over the other.
For example, there is no effective way to test whether or not a
Prop is true in a function definition; as a consequence, the
following definition is rejected:
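For instance, a definition along these lines is rejected (the name is illustrative, and the Fail prefix records that Coq refuses the command):
Fail Definition is_even_prime n :=
if n = 2 then true
else false.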
Coq complains that n = 2 has type Prop, while it expects an
element of bool (or some other inductive type with two elements).
This has to do with the computational nature of Coq's core language,
which is designed so that every function it can express is computable
and total. One reason for this is to allow the extraction of
executable programs from Coq developments. As a consequence, Prop in
Coq does not have a universal case analysis operation telling whether
any given proposition is true or false, since such an operation would
allow us to write non-computable functions.
Rather, we have to state this definition using a boolean equality
test.
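A sketch of the boolean version of the same definition:
Definition is_even_prime n :=
if n =? 2 then true
else false.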
Beyond the fact that non-computable properties are impossible in
general to phrase as boolean computations, even many computable
properties are easier to express using Prop than bool, since
recursive function definitions in Coq are subject to significant
restrictions. For instance, the next chapter shows how to define the
property that a regular expression matches a given string using Prop.
Doing the same with bool would amount to writing a regular expression
matching algorithm, which would be more complicated, harder to
understand, and harder to reason about than a simple (non-algorithmic)
definition of this property.
Conversely, an important side benefit of stating facts using booleans
is enabling some proof automation through computation with Coq terms, a
technique known as proof by reflection.
Consider the following statement:
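One way to phrase it, using the Even property defined earlier (the example names used here and below are illustrative):
Example even_1000 : Even 1000.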
The most direct way to prove this is to give the value of k
explicitly.
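Under that phrasing, the direct proof supplies 500 as the witness:
Proof. unfold Even. ∃ 500. reflexivity. Qed.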
The proof of the corresponding boolean statement is simpler, because we
don't have to invent the witness 500: Coq's computation mechanism
does it for us!
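A sketch of the boolean counterpart, where reflexivity does all the work:
Example even_1000' : even 1000 = true.
Proof. reflexivity. Qed.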
Now, the useful observation is that, since the two notions are
equivalent, we can use the boolean formulation to prove the other one
without mentioning the value 500 explicitly:
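A sketch of that argument, going through even_bool_prop:
Example even_1000'' : Even 1000.
Proof. apply even_bool_prop. reflexivity. Qed.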
Although we haven't gained much in terms of proof-script
line count in this case, larger proofs can often be made considerably
simpler by the use of reflection. As an extreme example, a famous
Coq proof of the even more famous 4-color theorem uses
reflection to reduce the analysis of hundreds of different cases
to a boolean computation.
Another advantage of booleans is that the negation of a "boolean fact"
is straightforward to state and prove: simply flip the expected boolean
result.
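For example, the non-evenness of 1001 can be stated and proved as a boolean fact (the name is illustrative):
Example not_even_1001 : even 1001 = false.
Proof.
reflexivity.
Qed.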
In contrast, propositional negation can be difficult to work with
directly.
For example, suppose we state the non-evenness of 1001
propositionally:
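Stated as a proposition (the primed name mirrors the boolean version above):
Example not_even_1001' : ¬ (Even 1001).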
Proving this directly -- by assuming that there is some n such that
1001 = double n and then somehow reasoning to a contradiction --
would be rather complicated.
But if we convert it to a claim about the boolean even function, we
can let Coq do the work for us.
Proof.
(* WORKED IN CLASS *)
rewrite <- even_bool_prop.
unfold not.
simpl.
intro H.
discriminate H.
Qed.
Conversely, there are complementary situations where it can be easier
to work with propositions rather than booleans.
In particular, knowing that (n =? m) = true is generally of little
direct help in the middle of a proof involving n and m, but if we
convert the statement to the equivalent form n = m, we can rewrite
with it.
Lemma plus_eqb_example : ∀ n m p : nat,
n =? m = true → n + p =? m + p = true.
Proof.
(* WORKED IN CLASS *)
intros n m p H.
rewrite eqb_eq in H.
rewrite H.
rewrite eqb_eq.
reflexivity.
Qed.
We won't discuss reflection any further for the moment, but it serves
as a good example showing the different strengths of booleans and
general propositions; we will return to it in later chapters. Being
able to cross back and forth between the boolean and propositional
worlds will often be convenient.
Exercise: 2 stars, standard (logical_connectives)
The following theorems relate the propositional connectives studied in this chapter to the corresponding boolean operations.
Theorem andb_true_iff : ∀ b1 b2:bool,
b1 && b2 = true ↔ b1 = true ∧ b2 = true.
Proof.
(* FILL IN HERE *) Admitted.
Theorem orb_true_iff : ∀ b1 b2,
b1 || b2 = true ↔ b1 = true ∨ b2 = true.
Proof.
(* FILL IN HERE *) Admitted.
☐
Exercise: 1 star, standard (eqb_neq)
The following theorem is an alternate "negative" formulation of eqb_eq that is more convenient in certain situations. (We'll see examples in later chapters.) Hint: not_true_iff_false.
Theorem eqb_neq : ∀ x y : nat,
x =? y = false ↔ x ≠ y.
Proof.
(* FILL IN HERE *) Admitted.
☐
Exercise: 3 stars, standard (eqb_list)
Given a boolean operator eqb for testing equality of elements of some type A, we can define a function eqb_list for testing equality of lists with elements in A. Complete the definition of the eqb_list function below. To make sure that your definition is correct, prove the lemma eqb_list_true_iff.
Fixpoint eqb_list {A : Type} (eqb : A → A → bool)
(l1 l2 : list A) : bool
(* REPLACE THIS LINE WITH ":= _your_definition_ ." *). Admitted.
Theorem eqb_list_true_iff :
∀ A (eqb : A → A → bool),
(∀ a1 a2, eqb a1 a2 = true ↔ a1 = a2) →
∀ l1 l2, eqb_list eqb l1 l2 = true ↔ l1 = l2.
Proof.
(* FILL IN HERE *) Admitted.
☐
Exercise: 2 stars, standard, especially useful (All_forallb)
Prove the theorem below, which relates forallb, from the exercise forall_exists_challenge in chapter Tactics, to the All property defined above.
Fixpoint forallb {X : Type} (test : X → bool) (l : list X) : bool
(* REPLACE THIS LINE WITH ":= _your_definition_ ." *). Admitted.
Theorem forallb_true_iff : ∀ X test (l : list X),
forallb test l = true ↔ All (fun x ⇒ test x = true) l.
Proof.
(* FILL IN HERE *) Admitted.
(Ungraded thought question) Are there any important properties of
the function forallb which are not captured by this
specification?
(* FILL IN HERE *)
☐
The Logic of Coq
Functional Extensionality
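Coq's equality can relate functions as well as numbers and lists. For instance, the following small example (the name parallels function_equality_ex2 below) holds by simple computation:
Example function_equality_ex1 :
(fun x ⇒ 3 + x) = (fun x ⇒ (pred 4) + x).
Proof. reflexivity. Qed.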
These two functions are equal just by simplification, but in general
functions can be equal for more interesting reasons.
In common mathematical practice, two functions f and g are
considered equal if they produce the same output on every input:
(∀ x, f x = g x) → f = g
This is known as the principle of functional extensionality.
(Informally, an "extensional property" is one that pertains to an
object's observable behavior. Thus, functional extensionality
simply means that a function's identity is completely determined
by what we can observe from it -- i.e., the results we obtain
after applying it.)
However, functional extensionality is not part of Coq's built-in logic.
This means that some intuitively obvious propositions are not
provable.
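For instance, the following equation, proved later once the axiom is available, cannot be established by computation alone; a sketch of the stuck attempt (the Fail command records that the tactic does not apply):
Example function_equality_ex2 :
(fun x ⇒ plus x 1) = (fun x ⇒ plus 1 x).
Proof.
Fail reflexivity.
(* Stuck: the two sides are not convertible. *)
Abort.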
However, if we like, we can add functional extensionality to Coq's
core using the Axiom command.
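Concretely, the axiom (whose statement can also be read off from the Print Assumptions output shown below) is:
Axiom functional_extensionality : ∀ (X Y : Type) (f g : X → Y),
(∀ x : X, f x = g x) → f = g.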
Defining something as an Axiom has the same effect as stating a
theorem and skipping its proof using Admitted, but it alerts the
reader that this isn't just something we're going to come back and
fill in later!
We can now invoke functional extensionality in proofs:
Example function_equality_ex2 :
(fun x ⇒ plus x 1) = (fun x ⇒ plus 1 x).
Proof.
apply functional_extensionality. intros x.
apply add_comm.
Qed.
Naturally, we need to be quite careful when adding new axioms into
Coq's logic, as this can render it inconsistent -- that is, it may
become possible to prove every proposition, including False, 2+2=5,
etc.!
In general, there is no simple way of telling whether an axiom is safe
to add: hard work by highly trained mathematicians is often required to
establish the consistency of any particular combination of axioms.
Fortunately, it is known that adding functional extensionality, in
particular, is consistent.
To check whether a particular proof relies on any additional
axioms, use the Print Assumptions command:
Print Assumptions function_equality_ex2.
(* ===>
Axioms:
functional_extensionality :
forall (X Y : Type) (f g : X -> Y),
(forall x : X, f x = g x) -> f = g *)
(If you try this yourself, you may also see add_comm listed as
an assumption, depending on whether the copy of Tactics.v in the
local directory has the proof of add_comm filled in.)
Exercise: 4 stars, standard (tr_rev_correct)
One problem with the definition of the list-reversing function rev that we have is that it performs a call to app on each step. Running app takes time asymptotically linear in the size of the list, which means that rev is asymptotically quadratic. We can improve this with the following two-argument definition:
Fixpoint rev_append {X} (l1 l2 : list X) : list X :=
match l1 with
| [] ⇒ l2
| x :: l1' ⇒ rev_append l1' (x :: l2)
end.
Definition tr_rev {X} (l : list X) : list X :=
rev_append l [].
This version of rev is said to be tail-recursive, because the
recursive call to the function is the last operation that needs to be
performed (i.e., we don't have to execute ++ after the recursive
call); a decent compiler will generate very efficient code in this
case.
Prove that the two definitions are indeed equivalent.
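A natural way to state this, in keeping with this section's theme of equalities between functions (the exact phrasing used here is an assumption), is:
Theorem tr_rev_correct : ∀ X, @tr_rev X = @rev X.
Proof.
(* FILL IN HERE *) Admitted.
☐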
Classical vs. Constructive Logic
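The principle at issue in this section is the law of the excluded middle, which is not provable in Coq's core logic. Stated as a definition (this is the excluded_middle referred to by the exercises below):
Definition excluded_middle := ∀ P : Prop,
P ∨ ¬ P.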
To understand operationally why this is the case, recall
that, to prove a statement of the form P ∨ Q, we use the left
and right tactics, which effectively require knowing which side
of the disjunction holds. But the universally quantified P in
excluded_middle is an arbitrary proposition, which we know
nothing about. We don't have enough information to choose which
of left or right to apply, just as Coq doesn't have enough
information to mechanically decide whether P holds or not inside
a function.
In the special case where we happen to know that P is reflected in
some boolean term b, knowing whether it holds or not is trivial: we
just have to check the value of b.
Theorem restricted_excluded_middle : ∀ P b,
(P ↔ b = true) → P ∨ ¬ P.
Proof.
intros P [] H.
- left. rewrite H. reflexivity.
- right. rewrite H. intros contra. discriminate contra.
Qed.
In particular, the excluded middle is valid for equations n = m,
between natural numbers n and m.
Theorem restricted_excluded_middle_eq : ∀ (n m : nat),
n = m ∨ n ≠ m.
Proof.
intros n m.
apply (restricted_excluded_middle (n = m) (n =? m)).
symmetry.
apply eqb_eq.
Qed.
Sadly, this trick only works for decidable propositions.
It may seem strange that the general excluded middle is not
available by default in Coq, since it is a standard feature of familiar
logics like ZFC. But there is a distinct advantage in not assuming
the excluded middle: statements in Coq make stronger claims than the
analogous statements in standard mathematics. Notably, a Coq proof of
∃ x, P x always includes a particular value of x for which we
can prove P x -- in other words, every proof of existence is
constructive.
Logics like Coq's, which do not assume the excluded middle, are
referred to as constructive logics.
More conventional logical systems such as ZFC, in which the
excluded middle does hold for arbitrary propositions, are referred
to as classical.
The following example illustrates why assuming the excluded middle may
lead to non-constructive proofs:
Claim: There exist irrational numbers a and b such that a ^
b (a to the power b) is rational.
Proof: It is not difficult to show that sqrt 2 is irrational. If
sqrt 2 ^ sqrt 2 is rational, it suffices to take a = b = sqrt 2 and
we are done. Otherwise, sqrt 2 ^ sqrt 2 is irrational. In this
case, we can take a = sqrt 2 ^ sqrt 2 and b = sqrt 2, since a ^ b
= sqrt 2 ^ (sqrt 2 × sqrt 2) = sqrt 2 ^ 2 = 2. ☐
Do you see what happened here? We used the excluded middle to consider
separately the cases where sqrt 2 ^ sqrt 2 is rational and where it
is not, without knowing which one actually holds! Because of this, we
finish the proof knowing that such a and b exist, but not knowing
their actual values.
As useful as constructive logic is, it does have its limitations: There
are many statements that can easily be proven in classical logic but
that have only much more complicated constructive proofs, and there are
some that are known to have no constructive proof at all! Fortunately,
like functional extensionality, the excluded middle is known to be
compatible with Coq's logic, allowing us to add it safely as an axiom.
However, we will not need to do so here: the results that we cover can
be developed entirely within constructive logic at negligible extra
cost.
It takes some practice to understand which proof techniques must be
avoided in constructive reasoning, but arguments by contradiction, in
particular, are infamous for leading to non-constructive proofs.
Here's a typical example: suppose that we want to show that there
exists x with some property P, i.e., such that P x. We start by
assuming that our conclusion is false; that is, ¬ ∃ x, P x. From
this premise, it is not hard to derive ∀ x, ¬ P x. If we manage
to show that this intermediate fact results in a contradiction, we
arrive at an existence proof without ever exhibiting a value of x for
which P x holds!
The technical flaw here, from a constructive standpoint, is that we
claimed to prove ∃ x, P x using a proof of ¬ ¬ (∃ x, P x).
Allowing ourselves to remove double negations from arbitrary
statements is equivalent to assuming the excluded middle law, as shown
in one of the exercises below. Thus, this line of reasoning cannot be
encoded in Coq without assuming additional axioms.
Exercise: 3 stars, standard (excluded_middle_irrefutable)
Proving the consistency of Coq with the general excluded middle axiom requires complicated reasoning that cannot be carried out within Coq itself. However, the following theorem implies that it is always safe to assume a decidability axiom (i.e., an instance of excluded middle) for any particular Prop P. Why? Because the negation of such an axiom leads to a contradiction. If ¬ (P ∨ ¬P) were provable, then by de_morgan_not_or as proved above, P ∧ ¬P would be provable, which would be a contradiction. So, it is safe to add P ∨ ¬P as an axiom for any particular P.
Succinctly: for any proposition P,
Coq is consistent ==> (Coq + P ∨ ¬P) is consistent.
Theorem excluded_middle_irrefutable: ∀ (P : Prop),
¬ ¬ (P ∨ ¬ P).
Proof.
(* FILL IN HERE *) Admitted.
☐
Exercise: 3 stars, advanced (not_exists_dist)
It is a theorem of classical logic that the following two assertions are equivalent:
¬ (∃ x, ¬ P x)
∀ x, P x
The dist_not_exists theorem above proves one side of this equivalence. Interestingly, the other direction cannot be proved in constructive logic. Your job is to show that it is implied by the excluded middle.
Theorem not_exists_dist :
excluded_middle →
∀ (X:Type) (P : X → Prop),
¬ (∃ x, ¬ P x) → (∀ x, P x).
Proof.
(* FILL IN HERE *) Admitted.
☐
Exercise: 5 stars, standard, optional (classical_axioms)
For those who like a challenge, here is an exercise adapted from the Coq'Art book by Bertot and Casteran (p. 123). Each of the following five statements, together with excluded_middle, can be considered as characterizing classical logic. We can't prove any of them in Coq, but we can consistently add any one of them as an axiom if we wish to work in classical logic.
Definition peirce := ∀ P Q: Prop,
((P → Q) → P) → P.
Definition double_negation_elimination := ∀ P:Prop,
~~P → P.
Definition de_morgan_not_and_not := ∀ P Q:Prop,
~(~P ∧ ¬Q) → P ∨ Q.
Definition implies_to_or := ∀ P Q:Prop,
(P → Q) → (¬P ∨ Q).
Definition consequentia_mirabilis := ∀ P:Prop,
(¬P → P) → P.
(* FILL IN HERE *)
☐