Relational Proofs Formal proofs in Relational Logic are analogous to formal proofs in Propositional Logic. The major difference is that there are additional mechanisms to deal with quantified sentences.

Logical Entailment A set of premises Δ logically entails a conclusion ϕ (Δ |= ϕ) if and only if every interpretation that satisfies Δ also satisfies ϕ. Logical entailment for Herbrand Logic is defined the same as for Propositional Logic: a set of premises logically entails a conclusion if and only if every truth assignment that satisfies the premises also satisfies the conclusion.

Propositional Truth Assignments In the case of Propositional Logic, we can check logical entailment by examining a truth table for the language. With finitely many proposition constants, the truth table is large but finite. With n constants, there are 2^n truth assignments.
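
To make the truth-table method concrete, here is a small Python sketch of brute-force entailment checking (an illustrative aside, not part of the original slides; the tuple-based formula representation and the function names are assumptions made for this example).

```python
from itertools import product

def holds(formula, assignment):
    """Evaluate a formula under a truth assignment (a dict from proposition to bool).

    A bare string is a proposition constant; compound formulas are tuples:
    ('not', p), ('and', p, q), ('or', p, q), ('implies', p, q).
    """
    if isinstance(formula, str):
        return assignment[formula]
    op, *args = formula
    if op == 'not':
        return not holds(args[0], assignment)
    if op == 'and':
        return holds(args[0], assignment) and holds(args[1], assignment)
    if op == 'or':
        return holds(args[0], assignment) or holds(args[1], assignment)
    if op == 'implies':
        return (not holds(args[0], assignment)) or holds(args[1], assignment)
    raise ValueError(f"unknown operator: {op}")

def entails(premises, conclusion, propositions):
    """Check Delta |= phi by enumerating all 2^n truth assignments."""
    for values in product([True, False], repeat=len(propositions)):
        assignment = dict(zip(propositions, values))
        if all(holds(p, assignment) for p in premises) and not holds(conclusion, assignment):
            return False
    return True

# {p, p => q} |= q
print(entails(['p', ('implies', 'p', 'q')], 'q', ['p', 'q']))  # True
```

The loop over product([True, False], repeat=n) enumerates exactly the 2^n rows of the truth table.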

Truth Assignments
  p(a)  p(f(a))  p(f(f(a)))  …
   1      1        1         …
   1      1        0         …
   1      0        1         …
   1      0        0         …
   0      1        1         …
   0      1        0         …
   0      0        1         …
   0      0        0         …
   …      …        …         …
Infinitely many interpretations! For Herbrand Logic, things are not so easy. It is possible to have Herbrand bases of infinite size, e.g. when the language contains function constants. In such cases, truth assignments are infinitely large and there are infinitely many of them, making it impossible to check logical entailment using truth tables.
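
The following fragment (again an illustrative sketch, not course code) shows why a single function constant already defeats the truth-table method: the ground terms a, f(a), f(f(a)), … never run out, so a truth assignment would have to assign values to infinitely many ground atoms.

```python
from itertools import islice

def ground_terms(constant='a', function='f'):
    """Generate the ground terms a, f(a), f(f(a)), ... without end."""
    term = constant
    while True:
        yield term
        term = f"{function}({term})"

# The Herbrand base contains the atom p(t) for every ground term t,
# so there are infinitely many columns in the "truth table" above.
print([f"p({t})" for t in islice(ground_terms(), 4)])
# ['p(a)', 'p(f(a))', 'p(f(f(a)))', 'p(f(f(f(a))))']
```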

Good News Given a few restrictions (described later), we have good news. Good News: If Δ logically entails ϕ, then there is a finite proof of ϕ from Δ. And vice versa. More Good News: If Δ logically entails ϕ, it is possible to find such a proof in finite time. All is not lost. As with Propositional Logic, we can define a proof system for Herbrand Logic. Interestingly, it is possible to show that, with a few simple restrictions, a set of premises logically entails a conclusion if and only if there is a finite proof of the conclusion from the premises, even when the Herbrand base is infinite. Moreover, it is possible to find such proofs in finite time.

Programme Proof System Examples, Examples, Examples In this lesson, we start by defining a proof system for Herbrand Logic based on Fitch. We then illustrate the system with a few examples. Finally, we talk about the soundness and completeness of the proof system given various restrictions.

Logical Rules of Inference Negations Conjunctions Implications Disjunctions Biconditionals In the Fitch system for Relational Logic, we include all ten of the rules of inference for Propositional Logic. We have Negation Introduction and Negation Elimination, Implication Introduction and Implication Elimination, Biconditional Introduction and Biconditional Elimination, and we have introduction and elimination rules for conjunctions and disjunctions.

New Rules of Inference Universal Introduction Universal Elimination Existential Introduction Existential Elimination In addition to these rules of inference for logical operators, we have four new rules for quantified sentences - introduction and elimination rules for both universal and existential sentences. Let’s look at them in turn. If you’re like me, the prospect of going through a discussion of four rules of inference sounds a little repetitive and boring. However, it is not so bad. Each of the rules has its own quirks and idiosyncrasies, its own personality. In fact, a couple of the rules suffer from a distinct excess of personality. If we are to use the rules correctly, we need to understand these idiosyncrasies.

Universal Introduction (UI) Universal Introduction (or UI) allows us to reason from arbitrary sentences to universally quantified versions of those sentences (subject to a constraint we discuss shortly).

Examples
Premise: hates(jane,x)      Conclusion: ∀x.hates(jane,x)
Premise: hates(jane,jill)   Conclusion: ∀x.hates(jane,jill)
Premise: hates(jane,y)      Conclusion: ∀x.hates(jane,y)
Typically, UI is used on sentences with free variables to make their quantification explicit. For example, if we have the sentence hates(jane,x), then we can infer ∀x.hates(jane,x). Note that we can also apply the rule to sentences that do not contain the variable that is quantified in the conclusion. For example, from the sentence hates(jane,jill), we can infer ∀x.hates(jane,jill). And, from the sentence hates(jane,y), we can infer ∀x.hates(jane,y). These are not particularly sensible conclusions. However, the results are correct, and the deduction of such results is necessary to ensure that our proof system is complete.

Disallowed Case
p(x)    Assumption
...
q(x)
∀x.q(x)    UI
There is one important restriction on the use of Universal Introduction. If the variable being quantified appears in the sentence being quantified, it must not appear free in any active assumption, i.e. an assumption in the current subproof or any superproof of that subproof. For example, if there is a subproof with assumption p(x) and from that we have managed to derive q(x), then we cannot just write ∀x.q(x).

Allowed Case
p(x)    Assumption
...
q(x)
p(x) ⇒ q(x)    II
∀x.(p(x) ⇒ q(x))    UI
If we want to quantify a sentence in this situation, we must first use Implication Introduction to discharge the assumption, and then we can apply UI. Here, we first apply Implication Introduction to derive the result p(x) ⇒ q(x) in the parent of the subproof containing our assumption, and we then apply Universal Introduction to derive ∀x.(p(x) ⇒ q(x)).

Universal Elimination (UE) Our second rule of inference is Universal Elimination, or UE. Universal Elimination allows us to reason from the general to the particular. It states that, whenever we believe a universally quantified sentence, we can infer a version of the target of that sentence in which the universally quantified variable is replaced by an appropriate term. We'll explain what makes a term appropriate in a moment. But first, some examples.

Examples
Premise: ∀x.hates(jane,x)
Conclusions: hates(jane,jill)    x←jill
             hates(jane,jane)    x←jane
             hates(jane,x)       x←x
             hates(jane,y)       x←y
Consider the sentence ∀x.hates(jane,x). Jane hates everyone. From this premise, we can infer that Jane hates Jill, i.e. hates(jane,jill). We can even infer that Jane hates herself, i.e. hates(jane,jane). In addition, we can use Universal Elimination to create conclusions with free variables. For example, we can infer hates(jane,x) or, equivalently, hates(jane,y).
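
Mechanically, Universal Elimination is just substitution of a term for the quantified variable in the scope of the quantifier. Here is a minimal sketch of that substitution over a toy sentence representation (the nested-tuple encoding and the names are assumptions made for illustration, not part of the course).

```python
def substitute(expr, var, term):
    """Replace every free occurrence of the variable `var` in `expr` by `term`.

    Relational sentences and compound terms are tuples whose first element is
    the relation or function name, e.g. ('hates', 'jane', 'x'); quantified
    sentences are ('forall', v, body) or ('exists', v, body).
    """
    if isinstance(expr, str):
        return term if expr == var else expr
    if expr[0] in ('forall', 'exists'):
        quantifier, v, body = expr
        if v == var:                      # var is bound here; leave it alone
            return expr
        return (quantifier, v, substitute(body, var, term))
    return (expr[0],) + tuple(substitute(arg, var, term) for arg in expr[1:])

# Universal Elimination on the scope of ∀x.hates(jane,x):
print(substitute(('hates', 'jane', 'x'), 'x', 'jill'))  # ('hates', 'jane', 'jill')
print(substitute(('hates', 'jane', 'x'), 'x', 'y'))     # ('hates', 'jane', 'y')
```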

More Examples
Premise: ∀x.∃y.hates(x,y)
Conclusions: ∃y.hates(jane,y)    x←jane
             ∃y.hates(y,y)       x←y    Wrong!!
In using Universal Elimination, we have to be careful to avoid conflicts with other variables and quantifiers in the quantified sentence. This is the reason for the constraint on the replacement term. Consider the sentence ∀x.∃y.hates(x,y), i.e. everybody hates somebody. From this sentence, it makes sense to infer ∃y.hates(jane,y), i.e. Jane hates somebody. However, we do not want to infer ∃y.hates(y,y), i.e. there is someone who hates herself. The problem here is that the quantifier in the sentence has captured the variable being substituted.

An occurrence of a variable ν in ϕ captures a term τ if and only if τ contains a variable μ and the occurrence of ν lies in the scope of a quantifier of μ. The occurrence of y in ∃x.hates(x,y) captures x. Why? The occurrence of y lies in the scope of a quantifier of x. Let's formalize this. An occurrence of a variable ν in ϕ captures a term τ if and only if τ contains a variable μ and the occurrence of ν lies in the scope of a quantifier of μ. What? That's a fairly complicated definition. Fortunately, it is easy to understand via an example. Suppose we want to substitute mother(x) for y in ∃x.hates(x,y). We cannot, because the occurrence of y in ∃x.hates(x,y) captures mother(x). Why? Because mother(x) contains the variable x and the occurrence of y lies in the scope of a quantifier of x.

Substitutability
A term τ is substitutable for ν in ϕ if and only if no free occurrence of ν captures τ. Some texts say “τ is free for ν in ϕ” instead of “τ is substitutable for ν in ϕ”.
mother(jane) is free for y in hates(jane,y).
mother(x) is free for y in hates(jane,y).
mother(x) is free for y in ∃z.hates(z,y).
mother(x) is not free for y in ∃x.hates(x,y).
mother(x) is free for y in (∀x.∃y.l(x,y) ∧ ∃z.h(z,y)).
In what follows, we say that a term τ is substitutable for a variable ν in ϕ if and only if no free occurrence of ν captures τ. Some texts say that τ is free for ν in ϕ. For example, mother(jane) is free for y in hates(jane,y) since the term contains no variables. The term mother(x) contains a variable, but it is free for y in hates(jane,y) since there are no quantifiers. The term mother(x) is free for y in ∃z.hates(z,y) because the sentence does not quantify any variable of mother(x). The term mother(x) is not free for y in ∃x.hates(x,y) because there is an occurrence of y in the sentence that lies within the scope of a quantifier of x, a variable of mother(x). Finally, mother(x) is free for y in the last, more complex sentence. Although there is a quantifier of x in the sentence, it is okay, because the only free occurrence of y lies outside the scope of this quantifier.
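
Substitutability can be checked mechanically as well. The sketch below (same toy representation as before; the single-letter convention for variables is an assumption of the example, not the course's) returns False exactly when some free occurrence of the variable would capture the term.

```python
def is_variable(symbol):
    # Sketch convention: single-letter symbols (x, y, z, ...) are variables;
    # longer names (jane, mother, hates, ...) are constants or relation names.
    return isinstance(symbol, str) and len(symbol) == 1

def term_variables(term):
    """All variables occurring anywhere in a term."""
    if isinstance(term, str):
        return {term} if is_variable(term) else set()
    result = set()
    for arg in term[1:]:
        result |= term_variables(arg)
    return result

def substitutable(term, var, expr, bound=frozenset()):
    """True iff `term` is substitutable (free) for `var` in `expr`, i.e. no free
    occurrence of `var` lies in the scope of a quantifier of a variable of `term`."""
    if isinstance(expr, str):
        if expr == var and var not in bound:
            return not (term_variables(term) & bound)   # capture iff the sets overlap
        return True
    if expr[0] in ('forall', 'exists'):
        _, v, body = expr
        return substitutable(term, var, body, bound | {v})
    return all(substitutable(term, var, part, bound) for part in expr[1:])

# mother(x) is not free for y in ∃x.hates(x,y), but it is free for y in ∃z.hates(z,y):
print(substitutable(('mother', 'x'), 'y', ('exists', 'x', ('hates', 'x', 'y'))))  # False
print(substitutable(('mother', 'x'), 'y', ('exists', 'z', ('hates', 'z', 'y'))))  # True
```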

Universal Elimination (UE) Avoiding conflicts of this sort is the main reason for the substitutability condition in Universal Elimination. So long as we obey this restriction when using the rule, we won't get into trouble.

Existential Introduction (EI) Restriction: the variable being introduced must not already occur in the sentence being quantified. Existential Introduction is easy. Not much personality here. If we believe a sentence involving a ground term τ, then we can infer an existentially quantified sentence in which one, some, or all occurrences of τ have been replaced by the existentially quantified variable.

Examples
Premise: hates(jill,jill)
Conclusions: ∃x.hates(x,x)
             ∃x.hates(jill,x)
             ∃x.hates(x,jill)
             ∃x.∃y.hates(x,y)
For example, from the sentence hates(jill,jill) we can infer that there is someone who hates herself, i.e. ∃x.hates(x,x). We can also infer that there is someone Jill hates, i.e. ∃x.hates(jill,x), and that there is someone who hates Jill, i.e. ∃x.hates(x,jill). And, by two applications of Existential Introduction, we can infer that someone hates someone, i.e. ∃x.∃y.hates(x,y). Note that, in Existential Introduction, it is important to avoid variables that appear in the sentence being quantified. Without this restriction, starting from ∃x.hates(jane,x), we might deduce ∃x.∃x.hates(x,x). It is an odd sentence, since it contains nested quantifiers of the same variable. However, it is a legal sentence, and it states that there is someone who hates himself, which does not follow from the premise in this case.
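
For comparison, here is the first of these inferences replayed in Lean 4 (a translation added for illustration, not part of the original lecture): Existential Introduction amounts to supplying a witness.

```lean
-- From hates(jill,jill), infer ∃x.hates(x,x), with jill as the witness.
example {Person : Type} (hates : Person → Person → Prop) (jill : Person)
    (h : hates jill jill) : ∃ x, hates x x :=
  ⟨jill, h⟩
```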

Existential Elimination (EE) Now for Existential Elimination (EE). Suppose we have an existentially quantified sentence with scope ϕ; and suppose we have a universally quantified implication in which the antecedent is the same as the scope of our existentially quantified sentence and the consequent does not contain any occurrences of the quantified variable. Then, we can use Existential Elimination to infer the consequent.

Example
Premises: ∃x.hates(jane,x)
          ∀x.(hates(jane,x) ⇒ ¬nice(jane))
Conclusion: ¬nice(jane)
For example, if we have the sentence ∃x.hates(jane,x) and we have the sentence ∀x.(hates(jane,x) ⇒ ¬nice(jane)), then we can conclude ¬nice(jane).
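
The same inference can be checked in Lean 4 (again an illustrative translation, not course material): Exists.elim plays the role of Existential Elimination here.

```lean
-- ∃x.hates(jane,x) and ∀x.(hates(jane,x) ⇒ ¬nice(jane)) yield ¬nice(jane).
example {Person : Type} (hates : Person → Person → Prop) (nice : Person → Prop)
    (jane : Person)
    (h1 : ∃ x, hates jane x)
    (h2 : ∀ x, hates jane x → ¬ nice jane) : ¬ nice jane :=
  h1.elim (fun x hx => h2 x hx)
```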

Example - Lovers

Lovers Everybody loves somebody. Everybody loves a lover. Show that Jack loves Jill. As an illustration of these concepts, consider the following problem. Suppose we believe that everybody loves somebody. And suppose we believe that everyone loves a lover. Our job is to prove that Jack loves Jill.

Proof Everybody loves somebody. Everybody loves a lover. Show that Jack loves Jill.
1. ∀y.∃z.loves(y,z)    Premise
First, we need to formalize our premises. Everybody loves somebody. For all y, there exists a z such that loves(y,z).

Proof Everybody loves somebody. Everybody loves a lover. Show that Jack loves Jill.
1. ∀y.∃z.loves(y,z)    Premise
2. ∀x.∀y.∀z.(loves(y,z) ⇒ loves(x,y))    Premise
Everybody loves a lover. If a person is a lover, then everyone loves him. If a person y loves another person, then everyone loves y. For all x, for all y, and for all z, loves(y,z) implies loves(x,y).

Proof Everybody loves somebody. Everybody loves a lover. Show that Jack loves Jill.
1. ∀y.∃z.loves(y,z)    Premise
2. ∀x.∀y.∀z.(loves(y,z) ⇒ loves(x,y))    Premise
3. ∃z.loves(jill,z)    UE: 1
The sentence on line 3 is the result of applying Universal Elimination to the sentence on line 1, substituting jill for y. Jill is the lover in our problem.

Proof Everybody loves somebody. Everybody loves a lover. Show that Jack loves Jill.
1. ∀y.∃z.loves(y,z)    Premise
2. ∀x.∀y.∀z.(loves(y,z) ⇒ loves(x,y))    Premise
3. ∃z.loves(jill,z)    UE: 1
4. ∀y.∀z.(loves(y,z) ⇒ loves(jack,y))    UE: 2
The sentence on line 4 is the result of applying Universal Elimination to the sentence on line 2, substituting jack for x. Jack is the person who loves Jill.

Proof Everybody loves somebody. Everybody loves a lover. Show that Jack loves Jill.
1. ∀y.∃z.loves(y,z)    Premise
2. ∀x.∀y.∀z.(loves(y,z) ⇒ loves(x,y))    Premise
3. ∃z.loves(jill,z)    UE: 1
4. ∀y.∀z.(loves(y,z) ⇒ loves(jack,y))    UE: 2
5. ∀z.(loves(jill,z) ⇒ loves(jack,jill))    UE: 4
The sentence on line 5 is the result of applying Universal Elimination to the sentence on line 4, substituting jill for y.

Proof Everybody loves somebody. Everybody loves a lover. Show that Jack loves Jill.
1. ∀y.∃z.loves(y,z)    Premise
2. ∀x.∀y.∀z.(loves(y,z) ⇒ loves(x,y))    Premise
3. ∃z.loves(jill,z)    UE: 1
4. ∀y.∀z.(loves(y,z) ⇒ loves(jack,y))    UE: 2
5. ∀z.(loves(jill,z) ⇒ loves(jack,jill))    UE: 4
6. loves(jack,jill)    EE: 3, 5
Finally, we apply Existential Elimination to produce the conclusion on line 6.
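
The whole six-step Fitch proof can be replayed in Lean 4 as a sanity check (an illustrative translation, not part of the lecture): the Universal Eliminations become function applications, and the Existential Elimination becomes Exists.elim.

```lean
-- Everybody loves somebody; everybody loves a lover ⊢ Jack loves Jill.
example {Person : Type} (loves : Person → Person → Prop) (jack jill : Person)
    (h1 : ∀ y, ∃ z, loves y z)                -- everybody loves somebody
    (h2 : ∀ x y z, loves y z → loves x y) :   -- everybody loves a lover
    loves jack jill :=
  (h1 jill).elim (fun z hz => h2 jack jill z hz)
```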

General Lovers Everybody loves somebody. Everybody loves a lover. Show that everybody loves everybody. Now, let's consider a slightly more interesting version of this problem. We start with the same premises. However, our goal now is to prove the somewhat stronger result that everyone loves everyone. For all x and for all y, x loves y.

Proof Everybody loves somebody. Everybody loves a lover. Show that everybody loves everybody.
1. ∀y.∃z.loves(y,z)    Premise
2. ∀x.∀y.∀z.(loves(y,z) ⇒ loves(x,y))    Premise
The proof shown here is analogous to the proof we just saw. As always, we start with our premises. These are the same as before.

Proof Everybody loves somebody. Everybody loves a lover. Show that everybody loves everybody.
1. ∀y.∃z.loves(y,z)    Premise
2. ∀x.∀y.∀z.(loves(y,z) ⇒ loves(x,y))    Premise
3. ∃z.loves(y,z)    UE: 1
4. ∀y.∀z.(loves(y,z) ⇒ loves(x,y))    UE: 2
5. ∀z.(loves(y,z) ⇒ loves(x,y))    UE: 4
6. loves(x,y)    EE: 3, 5
7. ∀y.loves(x,y)    UI: 6
8. ∀x.∀y.loves(x,y)    UI: 7
Once again, we use Universal Elimination to produce the result on line 3; but this time the result contains the free variable y instead of jill. We get the results on lines 4 and 5 by successive application of Universal Elimination to the sentence on line 2. As before, we deduce the result on line 6 using Existential Elimination. Finally, we use two applications of Universal Introduction to generalize our result and produce the desired conclusion.
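
The stronger conclusion goes through in exactly the same way; in a Lean 4 rendering (illustrative only) the two final Universal Introductions correspond to the leading fun x y => binder.

```lean
-- Everybody loves somebody; everybody loves a lover ⊢ everybody loves everybody.
example {Person : Type} (loves : Person → Person → Prop)
    (h1 : ∀ y, ∃ z, loves y z)
    (h2 : ∀ x y z, loves y z → loves x y) :
    ∀ x y, loves x y :=
  fun x y => (h1 y).elim (fun z hz => h2 x y z hz)
```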

Example - Harry and Ralph

Harry and Ralph Every horse can outrun every dog. Some greyhounds can outrun every rabbit. Harry is a horse. Ralph is a rabbit. Prove that Harry can outrun Ralph. Here we have another example of our proof system in action, this time emphasizing the usual trick for using existential sentences in proofs. We know that horses are faster than dogs and that there is a greyhound that is faster than every rabbit. We know that Harry is a horse and that Ralph is a rabbit. Our job is to derive the fact that Harry is faster than Ralph.

Harry and Ralph Every horse can outrun every dog. Some greyhound can outrun every rabbit. Harry is a horse. Ralph is a rabbit. Prove that Harry can outrun Ralph.
1. ∀x.∀y.(h(x) ∧ d(y) ⇒ f(x,y))    Premise
2. ∃y.(g(y) ∧ ∀z.(r(z) ⇒ f(y,z)))    Premise
3. ∀x.(g(x) ⇒ d(x))    Premise
4. ∀x.∀y.∀z.(f(x,y) ∧ f(y,z) ⇒ f(x,z))    Premise
5. h(harry)    Premise
6. r(ralph)    Premise
First, we need to formalize our premises. The relevant sentences are shown here. Note that we have added two facts about the world not stated explicitly in the problem: that greyhounds are dogs and that our faster-than relationship is transitive. Note the existential sentence on line 2. The trick for using such a sentence is to start a subproof, assume the scope of the existential, derive a conclusion in the subproof, and then use Implication Introduction, Universal Introduction, and Existential Elimination to promote the conclusion to the top level.

Harry and Ralph (continued)
7. g(y) ∧ ∀z.(r(z) ⇒ f(y,z))    Assumption
8. g(y)    AE: 7
9. ∀z.(r(z) ⇒ f(y,z))    AE: 7
Here we see this trick at work. On line 7, we start a subproof with an assumption corresponding to the scope of the existential on line 2. Lines 8 and 9 come from And Elimination.

Harry and Ralph (continued)
7. g(y) ∧ ∀z.(r(z) ⇒ f(y,z))    Assumption
8. g(y)    AE: 7
9. ∀z.(r(z) ⇒ f(y,z))    AE: 7
10. r(ralph) ⇒ f(y,ralph)    UE: 9
11. f(y,ralph)    IE: 10, 6
Line 10 is the result of applying Universal Elimination to the sentence on line 9. On line 11, we use Implication Elimination to infer that y is faster than Ralph.

Harry and Ralph (continued)
7. g(y) ∧ ∀z.(r(z) ⇒ f(y,z))    Assumption
8. g(y)    AE: 7
9. ∀z.(r(z) ⇒ f(y,z))    AE: 7
10. r(ralph) ⇒ f(y,ralph)    UE: 9
11. f(y,ralph)    IE: 10, 6
12. g(y) ⇒ d(y)    UE: 3
13. d(y)    IE: 12, 8
Next, we instantiate the sentence about greyhounds and dogs and infer that y is a dog.

Harry and Ralph (continued)
7. g(y) ∧ ∀z.(r(z) ⇒ f(y,z))    Assumption
8. g(y)    AE: 7
9. ∀z.(r(z) ⇒ f(y,z))    AE: 7
10. r(ralph) ⇒ f(y,ralph)    UE: 9
11. f(y,ralph)    IE: 10, 6
12. g(y) ⇒ d(y)    UE: 3
13. d(y)    IE: 12, 8
14. ∀y.(h(harry) ∧ d(y) ⇒ f(harry,y))    UE: 1
15. h(harry) ∧ d(y) ⇒ f(harry,y)    UE: 14
16. h(harry) ∧ d(y)    AI: 5, 13
17. f(harry,y)    IE: 15, 16
Then, we instantiate the sentence about horses and dogs; we use And Introduction to form a conjunction matching the antecedent of this instantiated sentence; and we infer that Harry is faster than y.

Harry and Ralph (concluded)
18. ∀y.∀z.(f(harry,y) ∧ f(y,z) ⇒ f(harry,z))    UE: 4
19. ∀z.(f(harry,y) ∧ f(y,z) ⇒ f(harry,z))    UE: 18
20. f(harry,y) ∧ f(y,ralph) ⇒ f(harry,ralph)    UE: 19
Here, we instantiate the transitivity sentence for the faster-than relation.

Harry and Ralph (concluded)
18. ∀y.∀z.(f(harry,y) ∧ f(y,z) ⇒ f(harry,z))    UE: 4
19. ∀z.(f(harry,y) ∧ f(y,z) ⇒ f(harry,z))    UE: 18
20. f(harry,y) ∧ f(y,ralph) ⇒ f(harry,ralph)    UE: 19
21. f(harry,y) ∧ f(y,ralph)    AI: 17, 11
22. f(harry,ralph)    IE: 20, 21
We form the necessary conjunction and infer the conclusion that Harry is faster than Ralph. Unfortunately, it is still in a subproof, and we need to get it to the top level to eliminate the assumption that allowed us to produce this result.

Harry and Ralph (concluded)
18. ∀y.∀z.(f(harry,y) ∧ f(y,z) ⇒ f(harry,z))    UE: 4
19. ∀z.(f(harry,y) ∧ f(y,z) ⇒ f(harry,z))    UE: 18
20. f(harry,y) ∧ f(y,ralph) ⇒ f(harry,ralph)    UE: 19
21. f(harry,y) ∧ f(y,ralph)    AI: 17, 11
22. f(harry,ralph)    IE: 20, 21
23. g(y) ∧ ∀z.(r(z) ⇒ f(y,z)) ⇒ f(harry,ralph)    II: 7, 22
24. ∀y.(g(y) ∧ ∀z.(r(z) ⇒ f(y,z)) ⇒ f(harry,ralph))    UI: 23
25. f(harry,ralph)    EE: 2, 24
Finally, we use Implication Introduction to discharge our subproof; we use Universal Introduction to generalize this result; and we use Existential Elimination to produce our overall conclusion. The main thing to take away from this example is how we assumed the scope of the existential on line 2, then proved our results, and then used Existential Elimination to promote those results to the top level.
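
As a final sanity check, the whole argument can be replayed in Lean 4 (an illustrative translation using the same one-letter predicate abbreviations; it is not part of the original slides). The function passed to Exists.elim plays the role of the subproof that begins with the assumption on line 7.

```lean
-- Every horse outruns every dog; some greyhound outruns every rabbit;
-- greyhounds are dogs; faster-than is transitive; Harry is a horse;
-- Ralph is a rabbit ⊢ Harry outruns Ralph.
example {Thing : Type} (h d g r : Thing → Prop) (f : Thing → Thing → Prop)
    (harry ralph : Thing)
    (p1 : ∀ x y, h x ∧ d y → f x y)
    (p2 : ∃ y, g y ∧ ∀ z, r z → f y z)
    (p3 : ∀ x, g x → d x)
    (p4 : ∀ x y z, f x y ∧ f y z → f x z)
    (p5 : h harry)
    (p6 : r ralph) : f harry ralph :=
  p2.elim (fun y hy =>
    -- assume g(y) ∧ ∀z.(r(z) ⇒ f(y,z)), as on line 7
    have hgy : g y := hy.1                          -- line 8
    have hyr : f y ralph := hy.2 ralph p6           -- lines 10-11
    have hdy : d y := p3 y hgy                      -- lines 12-13
    have hhy : f harry y := p1 harry y ⟨p5, hdy⟩    -- lines 14-17
    p4 harry y ralph ⟨hhy, hyr⟩)                    -- lines 18-22
```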