Is Reasoning the Same as Relevant Inference?

Lorenzo Peña

Spanish Institute for Advanced Study (Madrid)

§0.-- Introductory Remarks: Positive and Negative Approaches to Rationality

There are two main approaches to a theory of rationality: the positive one and the negative one. The latter, which has gained increasing acceptance, is primarily concerned with rejecting what is irrational, which usually is equated with what is inconsistent. The positive approach has a quite different purpose, that of studying reasoning and, insofar as possible, enhancing the patterns or standards of our reasoning practice.

We can of course entertain some sort of hybrid hopefully combining the best features of both approaches. Yet, before seriously considering any such idea -- which anyway I shall not explore in this paper -- a closer look at the two contenders is necessary. The negative approach tries to build a theory of irrationality. Its main motivation is that unreason is very common, in fact perhaps more common than reason, and anyway more threatening. So, a more important task than improving our reasoning is, to start with, sorting out what has any chance of being rational and what definitely has none. The main purpose of a theory of [ir]rationality is to provide us with criteria on what can and what cannot count as rational, in order for us to rule out irrational thinking and behaviour, and then, only then, to go into the remainder -- which is likely to comprise a few patterns having escaped doom.

The most common test of irrationality is taken to be inconsistency. Yet, what sort of inconsistency? Negation inconsistency, Post or absolute inconsistency, syntactic inconsistency? If what is at issue is negation inconsistency, then new questions arise: inconsistency with respect to this or that particular negation, or with respect to every negation?

All those questions fail to be raised simply because the inconsistency view of irrationality is as blind as love or charity proverbially are. Its advocates are seldom if ever aware that there are alternative logics according to which a theory can possess different sorts of consistency. They assume that all those sorts of inconsistency are equivalent, which is what classical logicians have taught them.

The negative approach seems to me in deep trouble for two reasons. One is that, once we realize that those different sorts of consistency and inconsistency are by no means equivalent, and that the only seriously disastrous sort is absolute (or Post) inconsistency, or the like, the fear of inconsistency abates. Not that consistency necessarily becomes irrelevant, but its significance is considerably lessened. As for absolute or Post inconsistency, it is no easy task to show that a theory is afflicted with such a deleterious ailment, unless of course the theory has a nice axiomatic presentation. And even then, it is not always clear whether the theory can be rewritten, while remaining the same, in such a way that it is cured of the infirmity.

That does not mean that, once we countenance several kinds of inconsistency, no reductio ad absurdum test can be devised. But no such test is then absolute and definitively conclusive. We can ascertain that a theory is in deep trouble, and that it needs some overhauling, seldom if ever that it is hopelessly wrecked.

Moreover -- and this is the second reason why the negative approach is unfruitful -- in order to assess all that, and to adjudicate how dismal the prospects of some theory can be, what is anyway required is serious and complicated reasoning, positive reasoning. You ought to prove that, under such or such assumptions, the theory suffers from this or that sort of inconsistency, and that the problem can or cannot be redressed with these or those remedies, with such or such results according as one or the other of the available alternatives is chosen, with comparative pros and cons. So, the negative approach can only be a part -- and a small part at that -- of a general, positive examination of reasoning.

Anxiety is not the best adviser. The helpful question is not «How to avoid this or that?» but «How to obtain these or those [good] results, how to improve our practice?» Moreover, no conclusive test or final, indisputable guarantee is to be expected, either here or anywhere else. Being appalled by the amount of sheer irrationality people are capable of is doubtless a healthy symptom of mental vigour. Allowing oneself to be overwhelmed by such a feeling is not conducive to a further improvement of our reasoning practices.

I think that the deepest and most widespread source of unhappiness with the positive approach is the not unfounded apprehension that it can deliver no criterion on what can in principle be accepted and what is definitely to be rejected. The complaint is not without foundations, since the positive view only tries to study what reasoning is, and hopefully to offer better reasoning patterns. But offering such patterns does not further the aim -- if there is such an aim -- of being able to rule out, so to speak beforehand, what is irrational, what lies beyond the pale of reason altogether.

The complaint seems to me all right, except that I do not think such a situation is to be deplored. Any theory of rationality which has something to offer is able to provide us with partial criteria and ways of improving our reasoning, but none can give us a kind of criterion like the one the negative theorist is looking for. Reasoning is relative, not absolute. Nothing is irrational or rational in an empty space, but only rational or irrational from the viewpoint of some set of premises and inference-rules. Those premises and inference-rules can be challenged, and clinging to them come what may can be irrational -- it will consist in failing to reason; but any further search for warrant or justification in support of the challenged premises or rules will in turn be rational from the viewpoint of some meta-rules, or canons, or principles, and so on. No absolutely firm, final bedrock or foundation is ever reached.

It is clear by now that we will be espousing the positive view: to reason is to infer, and since inference is relative, so is reasoning. Which does not mean that any mental process through which, starting with some antecedent thought, a person proceeds to a consequent thought is to count as rational, or as a reasoning. People are prone to many such processes which surely do not count as rational. We are acquainted with sophisms, for instance. And many such processes are perhaps not even sophisms, no claim to inference being made. (Such is more often than not the prose of euro-continental philosophers: they write that sentence, and then that other sentence, and that is that.)

§1.-- The Sources of the Relevantist Uneasiness with CL (Classical Logic)

Since time immemorial, logic has been regarded and advertised as the true master of what is real or good or genuine reasoning. Not that everybody has been attracted by such a claim. Hegel complained that you have to know how to reason in order to learn logic, not the other way round (just as studying physiology does not help with digestion). On the other hand, a number of contemporary writers, those belonging to the so-called critical thinking school (or movement), are challenging the logicians' claim to be masters of good thought. They contend that the study of logic does not improve ways of thinking, does not provide students with enhanced patterns of reflection or deliberation or with habits of asking questions, of challenging entrenched opinions or biased judgments.

There is a lot to be said in support of such complaints. As it is often imparted, logic becomes dogmatic. Its rules have to be learnt by heart, repeated and applied unquestioningly. Alternative logics are not seriously countenanced as worthy candidates, and so the patterns of classical logic are presented not even as the best ones, but as the only ones. Logical deviance appears to be an aberration.

Yet, despite those shortcomings of usual logical teaching, logic is what it has been claimed to be, the greatest master in matters concerning reason. Logic is the study of reason. It is not the study of how people in fact do reason, or do something which they call `reasoning'. It is the study of how they must reason. Why they must reason in some ways and not in others is an important philosophical question which I'll refrain from broaching here. Different answers can be offered to that question, but, whatever the best answer may be, what is certain is that logic alone can help us improve our ways of thinking.

My present study is concerned only with a particular sort of reasoning, deductive or demonstrative reasoning. Logicians have more often than not concerned themselves with deduction. Not that they have entirely neglected other sorts of reasoning, but their sin -- if it is one -- of granting almost all their favours to deduction is a venial one, on the ground of extenuating circumstances, like the intrinsic beauty of the subject.

Now, besides the critical thinking school, a number of authors have also challenged the claim that deduction, as it is investigated by logicians, reflects real reasoning, or even some idealized model thereof. The Italian Renaissance philosopher Lorenzo Valla took exception to the Aristotelians' syllogisms on the ground that nobody would reason like that. The reproach was repeated well into the 17th century. And nowadays, oddly enough, certain logicians themselves have claimed that such logical systems as they find distasteful are guilty on that count, countenancing as they do unusual ways of proceeding from some antecedent set of claims to a consequent claim. Such complaints are especially often voiced by logicians of the relevantist school.

In fact, Anderson & Belnap (henceforth A&B), in their book laying the foundations of the movement, maintained that their logical enterprise aimed at finding ways of reasoning free from distortions inaugurated by classical logic and having no natural application in real mathematical thought. They complained for instance that no one would infer an implication from the negation of an implication, and accordingly no implication is entailed either by its own negation or by the negation of any other implication. Similar claims are frequent in the relevantist literature. The Cornubia rule (p, ~p ∴ q) is rejected on the ground that no mathematician has any use for it.
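For the reader's convenience, here is the classical derivation of the Cornubia rule which the relevantists must block -- the «independent proof» made famous by C. I. Lewis. On A&B's diagnosis the culprit is the last step, disjunctive syllogism, which they hold invalid for relevant negation:

(1) p          premise
(2) ~p         premise
(3) p∨q        (1), addition
(4) q          (2), (3), disjunctive syllogism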

Should we take such a line of argument seriously, what would emerge as the logician's task would be some sort of idealization of real reasoning as it is practised -- an idealization which could take into account some correcting factors, in order to avoid decidedly bad ways of thinking against which even mathematicians are not proof.

So, the founding fathers of relevantism took as one of their main stands the idea that, whereas classical logic may provide people with inference rules which, they were sure, can never lead you astray -- can never get you from truths into complete falsehood --, only relevant logic can be a theory of reasoning. Proceeding from some premises to some conclusion according to classical logic is to infer. To do so only insomuch as the inference rules employed are those of relevant logic is to reason; to reason as people really reason, or as mathematicians reason -- well, let us say, as they reason when they reason well, properly speaking. If anyone reasons as the classical logician has them reason, it is only because they have been perverted or brain-washed.

I am not charging the founding fathers with any sort of psychologistic fallacy of Husserlian or Fregean fame. Doubtless their main idea was (or is) that reasoning is analytic, a priori, and so that what counts as a reasoning is to be provided by «intuition» -- whatever that may be. They implicitly or explicitly reject Quine's holism and they abhor the kind of extensionalism Quine's teachings have fostered. The core of their own enterprise is in a way a reversion to a view of logic more in line with the neopositivists' attitudes (though the neopositivists were guilty of espousing classical logic). Yet, since presumably such intuitions are not the privilege of an enlightened bunch of relevant logicians, looking at how people reason -- or at how mathematicians reason, since they are expected to be the most reasonable among us -- gives us at least a clue as to what the a priori right reasoning patterns are.

There is a striking link between such concerns and the main method A&B resorted to in their treatment, that of natural deduction. In classical logic you can infer from p that, if q, p. The inference is clear, simple and cogent. You only need the metatheorem of deduction, to the effect that, when a conclusion, q, can be inferred from a set of premises, p¹, ..., pⁿ, then, from all those premises except the last one, you can infer that pⁿ implies (or entails, or whatever) q. Since p can be inferred from the couple of premises {p,q}, the result is immediate.
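Spelled out, the derivation runs as follows:

(1) From {p, q} we infer p, a premise being always available as a conclusion.
(2) By the metatheorem of deduction, from {p} alone we infer «q→p».
(3) By a second application of the metatheorem, «p→.q→p» is inferred from the empty set of premises.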

The relevantists' objection is that in such an inference premise q has not been used. We have started with the claim that p can be inferred from p itself, and hence from any set of premises comprising p among other members; q does not enter into consideration, it can be anything you like, however odd. Then we apply the metatheorem of deduction regardless of whether or not the remaining members of the would-be premise set have been used in getting to the conclusion. And so we are stuck with the paradoxical result. What is wrong is that we did not, in the first place, conclude that p is true from the antecedent assertion of both premises.

Are A&B rejecting begging the question as a sophism? No, I do not think so. What they take exception to is not inferring p from p, but inferring «if q, p» from «p», which smuggles «q» into the conclusion on the basis of no merit whatsoever. We can infer «p» from the set {p,q} all right, but only because we infer p from p, with no role being performed by the other premise. It is an idle premise.

Thus, the main difference between relevant and classical logic is meant to be that in RL you can infer that p→q only if in securing the conclusion that q you have in fact used p as a premise. You have reasoned from p to q. It is not enough to have reasoned from a set of premises one of which is p. For reasoning from a set of premises consists in inferring from some members of the set, the only ones which are then entitled to enter the final conclusion.

So the natural deduction techniques A&B implemented with virtuosic elegance all aimed to ban encroachment by idle bystanders. Those bystanders are not links in the reasoning chain, and so have no right to receive any prize at the end of the story. This is why in order to reason you have to keep track of what premises have been used and how and when. Then you achieve relevance, whereas inferring as the classical logician preaches allows implicative conclusions where the antecedent has played no role at all in the process and has nothing to do with the consequent.

The relevantist enterprise is thus characterized by some constraints. Classical logic is indifferent to them, since it is only concerned with truth, and nothing else. The founding fathers look upon such a stance as unworthy of logic. Logic is a study of a priori, analytically evident ways of reasoning, which must remain untouched by contingent matters of fact. Must it also remain untouched by necessary matters of fact? On this point, relevantists are not of one mind. A&B clearly favoured the view that relevant system E is a logic of both necessary and relevant entailment, and therefore that such necessary truths as have to be admitted are already taken account of in their logic. But then, what about the idea, shared not only by classicists, but also by modalists, to the effect that a necessary truth is implied by everything? A&B reject that idea, or the formula «p→.q→q». However, if such a formula is generally true, its (purported) truth seems to be independent of matters of contingent fact. So either it is a necessary truth or else a necessary falsity. A&B claim that it is not a truth. They regard it as a fallacy of necessity, since no contingent fact implies a necessitive truth, that is to say a truth which not only is necessary but exhibits its own necessity. According to them, «p∨Np» is necessary, but not necessitive; so to speak, it happens to be a necessary truth, but it does not say that such or such a state of affairs is necessarily the case.

Of course, the formula «p→.q→q» can be objected to on different, less slippery grounds: it seems a clear case of a (so-called) fallacy of relevance, since the protasis and the apodosis share no variable. But then, what about «Mingle», «p→.p→p»? The same argument does not apply, of course. So, A&B can reject Mingle only on the basis of the very dubious principle of necessitive implication, PNI, according to which no necessitive truth is implied by a non-necessitive state of affairs. The same line of argument underlies the shaky claim -- already examined hereinabove -- to the effect that no implication is implied by the negation of an implication: a negated implication is, if true, contingent, whereas an implication is, if true, both necessary and necessitive.

PNI seems to me precarious and flimsy. The only justification I find for it is just the very general principle of relevance, to the effect that in order for a sentence of the form «p→q» to be true, «q» has to be drawn as a conclusion from assuming the hypothesis «p», and not known on independent grounds; and that such an inference from «p» to «q» has to comply with two constraints, namely that the conclusion follows from the premise alone and that in the inference the premise is really used. Which means that there is an effective reasoning from the hypothesis that p to the conclusion that q.

That a sentence follows from some premise alone means that no other premise, not even a logical theorem, is involved in the inference. But of course that does not mean that the conclusion follows from the premise without the help of some rule of inference. For the relevantists there is all the difference in the world between theorems and inference rules. Any theory built on the basis of RL has to retain the inference rules of that logic, but not necessarily always all its theorems. Logical theorems do not follow «from nothing». A&B denounce the «reason-shattering» classical view that theorems can be deduced from nothing. For one thing, if a logical truth, «p→p», for instance, follows from nothing, or from the empty set of premises, then it follows from any set of premises, in virtue of the general meta-rule of weakening or thinning (what can be inferred from a set S of premises can also be inferred from any superset of S). Thus, relevance would be destroyed.
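The destruction is easily exhibited:

(1) ∅ ∴ p→p          (on the classical view, the theorem follows from the empty set)
(2) {q} ∴ p→p        (1), weakening, ∅ being included in {q}
(3) ∅ ∴ q→.p→p       (2), metatheorem of deduction

And «q→.p→p» is a blatant fallacy of relevance: protasis and apodosis share no variable.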

So, what emerges as the relevantist view of reasoning or relevant inference is that reasoning is independent not only of contingent facts, but also of the knowledge of necessary facts. The idea, dear to philosophers of the neopositivist tradition, that logic is analytic and a priori, factually empty, purely formal, or contentless, and so on, was hard to reconcile with the view that each logical truth follows from any factual statement; for then logical omniscience would be attributed to everybody. And even though the logical positivists managed to contrive ways of making that attribution believable, in the end both lines of thought remained hardly compatible.

The relevantists offer us a solution of sorts. What they offer us is more or less this: If «q» follows logically from «p» and your theory has it that p, it claims that q, too. But your theory may fail to comprise many logical truths.

Well, is it so? For any implicative logical truth, «p→q», there is a corresponding inference rule p∴q. And from any inference of «q» from «p», there is, in virtue of the deduction metatheorem, a [meta]inference to the conclusion «p→q». So, either reasoning is not closed as regards such a metarule -- which seems odd, the natural deduction techniques intended to justify RL making much of that metarule -- or else, according to the relevantist approach, your theory can fail to contain the logical theorem «p→q» only if it fails to contain «p». But then Mingle seems to be around the corner again. For «p→p» is a logical theorem. In accordance with our argument, it can fail to be contained by a theory only if the theory fails to contain «p». So, any theory containing some assertion, «p» will also contain «p→p» for that particular «p». But doesn't that mean that even according to the relevantist view of inference «p→p» can be inferred from «p»?
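Schematically, the road to Mingle is short:

(1) «p→p» is a logical theorem.
(2) By the argument just rehearsed, a theory can lack «p→p» only if it lacks «p»; hence any theory containing «p» contains «p→p»; that is, p ∴ p→p.
(3) By the metatheorem of deduction, «p→.p→p» -- Mingle.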

Some new relevantist logicians (the Israeli Arnon Avron, the Spaniard José Méndez) are convinced by arguments to that effect, and so they undertake an overhaul of RL in order to accommodate Mingle. The trouble with Mingle is that, even if on the face of it it seems a very «relevant-like» principle, it leads to clear irrelevance in the presence of other principles commonly accepted in RL. The system RM (R + Mingle) contains such theorems as «p→q∨.q→p» and «p∧Np→.q∨Nq». Even if in the entailment system E those results do not follow from adding Mingle, some other damaging formulae are let in, like the expansion principle: «p→q→.p→.p→q». Such results would ruin the relevantist enterprise. So the new relevantists take a different path. Méndez abandons relevant negation and adopts some weaker sort of negation (by replacing the contraposition axiom schema with a contraposition rule restricted to theorems: if ∴p→q then ∴Nq→Np). Avron junks classical disjunction and conjunction in favour of some new sort of defined pseudo-disjunction and pseudo-conjunction, for which the classical rules of addition and simplification are no longer valid. His rationale for so doing is that «p∨q» (for that new-fangled `∨') fails to follow from «p» when there is no relevant link between «p» and «q». Thus, such an approach can be termed `radical relevantism'. On the other hand, though, Avron's approach -- and any approach validating Mingle -- somehow reverts to a more classical view of the consequence operation. Classically, the consequence operation φ is such that, for any set of sentences S, S⊆φS=φφS, and logical theorems belong to φS for any S. Within mainstream relevantism -- and more obviously so within deep relevantism -- even though each theory is closed for logically valid entailments, it may fail to be an extension of the logic, that is to say logical theorems may fail to be theorems of the theory. With RM -- which is the system Avron chooses to profess -- you are not very far from the classical conception of the consequence operation.

Thus, the founding fathers of relevantism had good reasons to avoid Mingle. But then aren't we led to the conclusion that the metarule of deduction, even in its relevantist cast and subject to relevantist strictures, is not a rule for which all theories have to be closed? That situation would somehow open a chasm between rules and meta-rules. The solution seems to be this one. Every theory is closed for the meta-rule of deduction all right, but that does not mean that, if it contains «p», it has to contain «p→p», since the meta-rule does not say that «p→p» can be deduced from «p»: what it says is that «p→p» can be deduced from a proof of «p» from «p». Proving something from «p» is one thing; proving it from a proof of «p» from «p» is quite another.

The difference is by no means trivial. The natural-deduction techniques make much of implementing inferences whose antecedent part is an inference, not a set of premises. The core idea of the rule of →-introduction is precisely that in order for «p→q» to be concluded, it must be inferred from a [relevantly acceptable] proof of «q» from «p». Let us see how that approach blocks the dreadful VEQ (Verum e quolibet: «p→.q→p»). Let us consider the following series of steps:

(2) pᵢ       hyp
(3) qⱼ       hyp
(4) pᵢ       (2), repetition
(5) q→p      (3), (4), →-introduction (attempted)

The inference is not allowed in any relevant system, since step (4) has not been deduced from step (3), which is shown by the fact that the subscripts are different. In order for «p→q», with subscript set K, to be inferred from a proof of «q», with subscript set I, under the hypothesis «p», with subscript set J, the set J has to be included in I, and K will be I−J.
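Contrast the legitimate derivation of «p→p», where the bookkeeping succeeds:

(1) pᵢ       hyp
(2) pᵢ       (1), repetition
(3) p→p      (1), (2), →-introduction: here J = I = {i}, so K = I−J is empty

The conclusion carries no subscripts, as befits a logical theorem.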

This is a nice idea, and the natural deduction techniques employed in furthering it are both appealing and plausible. It is in the presence of such results that the relevantist claim to have developed a logic of reasoning seems to me strongest. The flags or subscripts are evidence that you are reasoning in proceeding from what is above to what is below, rather than just saying one thing first and another thing later. The classical view seems inattentive to that crucial difference. Classical logic may be a logic of truth-preservation, but not a logic of reasoning.

§2.-- Difficulties Surrounding the Relevantist Program

Unfortunately, all is not well in the relevantist kingdom. Even if the natural deduction techniques A&B developed were quite all right in themselves -- which as we are going to see they are not --, the full implementation of a proof theory, through Gentzen techniques, only succeeds at a terrible price, which makes the outcome a Pyrrhic victory. Instead of having antecedent sets of premises we are now bound to have multi-sets of them, a multi-set being characterized not just by what elements it comprises but also by how many times each of them is comprised by the multi-set in question. Which introduces a new structural connective between premises, the so-called «intensional» connective (although I think the term is particularly ill-chosen, since, even if we do not like the awful complication of multi-sets, they are not intensional entities). That is bad enough. The worst is that even that is not the end for system E of entailment, which lacks the permutation principle «p→(q→r)→.q→.p→r». So the proof theory for E -- or even for subsystems of E which have been shown to be (in principle, if not in practice) decidable -- resorts to three different structural connectives, thus becoming in effect unmanageable. As a logic of reasoning, of a priori evident ways of proceeding from premises to conclusions requiring no special information either about the world or even about necessary truths, the failure seems irredeemable.
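To illustrate the multi-set machinery: the antecedents [p, p, q] and [p, q] are distinct multi-sets, although as sets both are just {p, q}; accordingly p, p ∴ q and p ∴ q must be told apart, and contraction -- the collapsing of repeated premises -- has to be enacted as an explicit structural rule instead of holding automatically, as it does when antecedents are plain sets.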

There is worse to come. The worst of all concerns conjunction. Within CL from the couple of premises «p» and «q» you can infer the conclusion «p∧q» thanks to the theorem «p⊃.q⊃.p∧q». Since the like is not available in RL, the relevantists fall back on taking the adjunction rule as primitive. Well, the main motivation behind the relevantist enterprise -- at least as concerns their setting up of system E of entailment -- was that «p→q» is true iff «q» relevantly follows from «p». But what about «r» relevantly following, not from «p» alone (or from {p}), but from {p,q}? Well, of course, then the entailment principle gives us an answer: in that case, what is true is «p∧q→r». The set-theoretical partnership between the premises is a structural connection which stands for conjunction (or conversely if you like). Yes, certainly, we all agree, but that introduces two connectives at the same time, not one, which falls afoul of widespread (if a little stiff) constraints about connective-introduction.

To my mind, what is serious is not that the adjunction rule, taken as a primitive, runs counter to such harsh and prim strictures, but that it is underivable within RL, and so no philosophical argument supporting it can have a formal counterpart expressed with logical techniques within the relevant systems. When I say that the rule is underivable I do not mean that nothing whatever can be invented by way of some weird rule or other yielding Adjunction as a derived rule after all. But no natural path is to be found along which we can get to see that the rule is a correct one -- within the relevant realm, I mean. The ground for that is the unique status of adjunction within RL. In fact that peculiarity points to a weird cleavage between two different sorts of reasoning, according to relevantist standards: reasoning from only one premise with the help of a [premised] implication, and reasoning from two premises, none of which plays [in the inference] the role of an implication (even if it is one).

The dichotomy matters, since the relevantist main idea is that of finding out what alone is involved in a reasoning. Remember, inference rules are not involved. Only premises are involved. No logical theorem or any other truth outside the premises is involved. But there are, in the Hilbert-style presentations of relevant systems E and R, only two primitive rules of inference, MP and Adjunction. Moreover, `→' is and has to remain primitive, so there is no hope of finding natural replacements as [alternative] primitive rules of inference.

Thus, the idea is reasonably straightforward. Its slogan is «No suppression!» CL is enthymematic, and so are many, or almost all, nonclassical logics -- all except those developed within the relevantist school. But the slogan is easier said than done.

The relevantist slogan amounts in effect to a methodological canon forbidding omissions from the list of statements involved in drawing a conclusion -- which at first glance is OK, since the whole enterprise intends to capture the logic of reasoning. The trouble is that what is involved in an inference is not absolute but relative. Any Hilbert-style system complying with usual standards has alternative presentations, each of which introduces as primitive some inference rules and some axioms. The system can remain the same while the presentation varies, provided the ensuing inferential power remains unchanged. But the power can remain unchanged even when the primitive inference rules are altered, thanks to adjustments in the set of axioms. The chasm the relevantists open up between axioms and rules seems too extreme.

Yet, as already hinted, there are special reasons -- which concern Adjunction -- for being suspicious about the plan. We know that the relevantists' main idea -- as regards the natural deduction technique, which everybody is aware was the core of A&B's whole enterprise -- is that, when you draw a conclusion «p» from premises «q¹», ..., «qⁿ», the flags or subscripts assigned to the premises must be passed on to the conclusion, thus keeping track of the reasoning thread. We could then expect that within that natural deduction framework «p∧q» would be deduced from {p,q} provided the subscripts of both premises are inherited by the conclusion, that is to say provided «p∧q» is assigned all the subscripts of «p» and all the subscripts of «q». But no, things cannot be like that. For then, within system R we could prove «p→.q→.p∧q», and therefore «p→.q→p» -- which, in the colourful expression of Dunn, one of the champions of the school, would be equivalent to washing dirty money through the third world. Within system E that dismal outcome is averted, even with Adjunction formulated as we have just described, but a different if less damaging result ensues: we could then prove «p→q→.p→.p∧q», which is the principle of reduced factor, RF for short. From which the whole of Factor would be easily deduced: «p→q→.p∧r→.q∧r».
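Here is how the dismal proof would run within R, with Adjunction inheriting the union of the subscripts:

(1) pᵢ            hyp
(2) qⱼ            hyp
(3) (p∧q)ᵢⱼ       (1), (2), Adjunction, inheriting both subscripts
(4) (q→.p∧q)ᵢ     (2), (3), →-introduction: J={j}⊆I={i,j}, K={i}
(5) p→.q→.p∧q     (1), (4), →-introduction: J={i}=I, K empty

whence, «p∧q→p» being a theorem, transitivity yields «p→.q→p», the banned VEQ.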

Factor is a principle which has been studied by the Australian relevantists, especially by Sylvan and by Sylvan & Urbas in a joint publication. Those authors have shown that Factor leads to irrelevance when joined to the axioms of system E but not necessarily so when several of those axioms are sufficiently watered down.

Within the framework of system E of entailment, Factor (or equivalently RF) immediately leads to validating as a theorem the principle of implied self-implications, PII, «p→q→.r→r»: any self-implication is implied by any implication. True, that result is still very far from the banned VEQ. A system with PII may nonetheless comply with some of the relevantist strictures. For instance it may fail to have a strongest formula, that is, a formula «p» such that for every formula «q», «p→q» is a theorem. It may have the Ackermann property, which means that for no implicative «q» is «p→q» a theorem, when «p» is a sentential variable. In fact a system with PII (or equivalently -- within the framework of E -- with Factor) may remain miles away from classical logic. However, such a system is no longer a relevant system. For a minimal, necessary (although not sufficient) condition of relevance is that no formula «p→q» is a theorem if «p» and «q» share no variable.

§3.-- Three Ways Out -- or the Reasons for a Gradualistic Appropriation

It is a widespread tendency in human behaviour to be content with nothing less than grand thoroughgoing principles, adherence to which can afford stable situations. Most of the time, things are less straightforward and more complicated than we had fancied. The «all or nothing» criterion is likely to lead us astray.

That seems to me the case as regards the idea of relevance. It was a nice idea. It appealed to some qualms a number of students have felt over the years when first becoming acquainted with classical logic and the apparently whimsical turns of classical inference. Yet implementing the idea up to the hilt leads to such amazing results that you ought to wonder if the price is right.

There are several ways out. One is offered by what I have called radical relevantism, that of Méndez and Avron. The word may not be apposite, since in some important (and relevant) respects it amounts to jettisoning a nuclear part of the original relevantist plan, namely having systems with the Ackermann property, and avoiding the result that a logical axiom follows from a contingent sentence; in other words, claiming that from «p» alone nothing follows except «p» (and «NNp» and «p∧p» -- and also «p∨q» and «q∨p» etc.?); in particular nothing follows which we can also know from other sources -- as logical theorems can be known, through the study of logic presumably. (Radical relevantists fall back on the converse Ackermann property: no theorem of the form «p→q→r», where «r» is a sentential variable. In some important sense, as will emerge below, that kind of approach is the dual of the one I shall be suggesting towards the end of the paper.)

The second way out is offered by deep relevantism, so-called, whose main champion is Richard Sylvan. Its general plan -- except perhaps as concerns Factor and some other isolated principles -- is to further weaken system E. There are a number of features separating Sylvan's philosophical enterprise from the original relevantist idea, not all of which are directly related to his weakening proposal. Such a proposal can be independently defended on the ground that it is more cautious to assert less: if we can implement logical systems sufficiently useful for reasoning while keeping clear of the more controversial principles, it is reasonable to refrain from asserting those principles: since logic is a priori and acquired through (considered or reflective) intuition, the more controvertible a principle is, the less likely it turns out to count as a genuine a priori, analytic, non-factual truth. The other main idea in Sylvan's own enterprise seems to me a quite different and even, to some extent, opposite idea. The founding fathers loathed contradictions as much as the classicist, and never for a minute thought that a contradiction could be true at all; their objection to classical logic was not that, in virtue of the Cornubia rule, it could lead from a true contradiction to an utterly false conclusion, but that it leads from statements taken as premises to a statement taken as [pseudo]conclusion which in fact has nothing to do with the premises. As against that point of view, Sylvan has been led little by little to the idea that there can be true contradictions, and in fact that there are. Now, that may be the case, and I am in fact sure it is the case. Yet, in an important sense, this runs against relevantism as initially conceived. For if the Cornubia rule is to be rejected on the ground that a contradiction can be true after all, the classical view that what is [utterly] impossible implies everything is not challenged: you only displace the bounds of the impossible. Of course, you can be both a relevantist and a believer in true contradictions. Still, you are then bound carefully to sort out your grounds for each of your departures from classical logic, or from any logic you happen to take as your starting point. Finally, and most significantly for our present concerns, Sylvan has developed a very different approach to a logic of reasoning, which is at variance with the classical outlook in a much more radical way than even mainstream relevantism is. In so doing, he renounces the claim that RL is in general a logic of reasoning. Yet, canvassing the pros and cons of Sylvan's plan for a logic of reasoning goes beyond the scope of the present paper.

A third way out is provided by a gradualistic appropriation of the relevantist plan. `Appropriation' seems to me the right word, since gradualism and relevantism have never been bedfellows, their leanings pulling them apart from one another. The main idea of gradualism has little to do with relevantist concerns. In fact it consists in recognizing that there are degrees of truth and, accordingly, since what is to some extent true is true, there are true contradictions, but that, insomuch as the general classical view of logic can be adapted to such an acceptance of degrees of truth, it can and must remain unchanged in all other respects. In particular, gradualism has been keen on keeping, alongside a weaker or natural negation, a strong, classical negation, endowed with the reading `not...at all', through which in fact systems of gradualistic logic are conservative extensions of CL, which maintain not only all classical theorems, but also all classical inference rules (provided the translation of classical negation is strong negation, of course). Thus, the gradualistic approach relinquishes the Cornubia rule for natural or weak negation but keeps it for strong or classical negation. However, if you have a strong negation, you are bound to countenance inferences which fall afoul of the relevantist constraints. You can no longer pride yourself on being relevant in that sense. It is no use avoiding «p∧Np→q» if you accept «p∧¬p→q», `¬' being strong negation. Your logical choice may have a number of reasons to recommend it, but not the general unqualified principle of relevance.

This state of affairs probably explains why thus far no bridges have been built between the two schools. Their original motivations kept them quite apart. Gradualism has remained adamant in its closeness to the classicist's main ideas and in fact it has been developed with forceful allegiance to an extensionalistic, Quinean approach to many philosophical subjects. The idea of degrees of truth is compatible with extensionalism, and in fact is the only ground on which Quine himself has contemplated abandoning CL (in his «What Price Bivalence?», Journal of Philosophy 78/2 [February 1981], pp. 90ff).

However, logic has a number of surprises to offer. One of them is that gradualism is a not so distant relative of relevantism, which is going to become clear through some mending (in fact a powerful strengthening) of system E. Needless to say, the kind of moderate, middle of the road approach I am going to sketch as the final part of this paper entails renouncing the unqualified main tenets of the founding fathers and probably of most relevantist thinkers. A number of inferences which do not conform to the relevantist constraints have to be accepted. To such extent, the main motivation of the relevantist movement -- to capture a logic of reasoning, in a somehow puritanical sense of the word -- seems to me hard to retrieve. Yet, something in the neighbourhood emerges, something through which the relevantist enterprise is vindicated all the same.

§4.-- A Gradualistic Construal of Subscript-Assignment, and how to Strengthen System E

The relevantist implementation of natural deduction techniques consists in assigning subscripts to the premises and thus keeping track of the thread of the argument. Since reasoning is putting forward grounds for getting some conclusion, the procedure seems quite reasonable. In fact the relevantist logicians didn't need to invent it, since it had already been designed even within the framework of CL as a didactic tool. It had only to be put to a more substantial use.

Does the idea work? Well, within the relevantist program only after a fashion. The trouble comes with Adjunction, as we have seen. The natural way of countenancing Adjunction fails. It could be enacted as a strengthening of system E, but, as we have seen, that would entail acceptance of Factor and PII. The relevantist logicians offer us some makeshifts; for instance «p∧q»ᵢ can be inferred from the couple of premises «p»ᵢ and «q»ᵢ, that is to say both premises have to bear the same subscript. Which in practical terms means that outside logic nothing can be inferred from premises «p» and «q» given independently of one another. Any nonlogical theory has to be provided with only one axiom, which can be a conjunction of formulae, or else nothing can be inferred from the separate axioms, unless they are cast in terms allowing use of implicative MP.
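For instance, a theory given by two separate axioms «A» and «B» is stuck at once:

(1) A₁       first axiom, as hypothesis
(2) B₂       second axiom, as hypothesis
(3) A∧B      blocked: the premises' subscripts, {1} and {2}, are not the same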

What in effect the relevantists are doing is to reduce Adjunction to a systemic rule, which is to be applied only to such premises as are logical theorems. For a logic of general reasoning such a step is a policy of despair. With Adjunction so hamstrung no bright prospect is opened for reasoning.

Now, what if we strengthen E through Factor, thus unshackling Adjunction at the same time? We have seen that the general principle of relevance will no longer be in operation, since we'll have «p→q→.r→r», the PII. But can something of the initial implementation of natural deduction techniques be rescued all the same? Yes, a lot of it can be saved. But an overhaul is necessary, and a different interpretation is to be put on the whole assigning of subscripts.

The interpretation to be now considered is that keeping track of the use of the premises is a guarantee to the effect that the conclusion is not less true than the premises. That idea is closely connected with a program put forward by Guccione & Tortora, two Italian logicians working in the field of many-valued and fuzzy logic. And, surprisingly, at least once within the relevantist movement -- apropos of system RM, which, granted, is no longer a relevant system of logic -- Robert K. Meyer developed similar ideas. With such an overhaul, the target is no longer that of keeping clear of irrelevancies, but that of avoiding an increase in the degree of falseness of your assertions.

So, let us think of subscripts assigned to the premises as variables ranging over degrees of truth or falsehood. The main idea is now that from «p→q» and «p»ᵢ you can conclude «q»ⱼ provided you jot down that j≤i (the degree of falseness of the conclusion does not exceed that of the premise). What about the very same implication, «p→q»? Doesn't it receive a subscript? All asserted implications receive the same subscript. Their degree of truth is immaterial. In fact there are grounds for regarding implication as two-valued -- which does not mean that the two values must necessarily be the two classical extremes of complete truth and complete falsity.
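Schematically, an application of MP is now recorded thus:

(1) p→q      asserted implication (all asserted implications bear one and the same subscript)
(2) pᵢ       premise
(3) qⱼ       (1), (2), MP, jotting down the constraint j≤i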

Thus implications are «special». It seems to me this is as it should be, even from the very same relevantist motivations. After all the founding fathers made much of the cleavage between facts and entailments. Not that I think they were quite right on that score either, since entailments are entailmental facts; their looking at entailments as non-facts is perhaps connected with their acceptance of system R as a relevant logic: if an implication is, when true, a fact, then the permutation principle is hard to believe: even if p is relevant to the fact that q is relevant to r, it does not follow that q is relevant to the fact -- if it is one -- that p is relevant to r. Likewise, even if the authorities' carelessness brings it about that the earthquake causes much damage, it does not follow that the earthquake brings it about that the authorities' carelessness causes much damage.

Once we accept Factor -- without relinquishing any other E principle or rule -- things begin to be straightened out, and a number of oddities in system E vanish. For instance, with system E you cannot infer «p→.p∧q» from «p→q». Yet, in any theory wherein you have, for some formulae «p» and «q», both «p→p» and «p→q» you'll also have «p→.p∧q» -- if Adjunction can be applied to those theorems. How is that possible? The answer is that inference, in the canonical relevant sense, is not the usual consequence operation. A theory's being closed for some operation φ [on sets of formulae] is neither a necessary nor a sufficient condition for it to be the case that within such a theory formulae in φS can be inferred from the set of formulae S. Yet, isn't it really odd that within a theory in which two implications are theorems which share all their atomic formulae and are in fact very similar, «p→q» and «p→.p∧q», the latter cannot be inferred from the former? What else do we need in order to be able to draw the conclusion?
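In the strengthened system the missing link is supplied by reduced factor; a sketch:

(1) p→q→.p→.p∧q      RF, available once Factor is adopted (the two being equivalent within E, as noted above)
(2) p→q              premise
(3) p→.p∧q           (1), (2), MP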

Now, with our overhauled implementation, we have a different situation: any asserted implication in a system can be inferred from any implication (provided the latter is asserted, of course). As regards implications, our ways are classical. That does not mean that an asserted implication can be inferred from nothing or from anything. The Ackermann property still holds sway. And proof theory becomes so much simpler!

Inference, so implemented, is still not necessarily the same as the consequence operation. A theory may contain theorems less true than true implications are; yet an implication, if true, does not imply those theorems. On the other hand, a true implication is not implied by all sentences; so, in particular, it is not the case that for any two theorems, «p», «q», p∴q. But we can (and must) also recognize a different inference relation which coincides with the consequence operation in the usual, classical sense; let us call it Ç. From S∴p it follows that SÇp, but not conversely. (There are other links between Ç and ∴: if SÇp, there is some truth, «q», such that S∪{q}∴p.)

While failing to be identical to the consequence operation (Ç) outright, ∴ as now conceived is much closer to it, due to the particular status of implications. But it is not close enough yet. Even with Factor and PII, implication still falls short of our methodological maxim: «Remain as close to the classical model as is compatible with carrying out your program of a logic of degrees of truth». We have made implications classical in one sense by rendering all true implications equally true. But what about false implications? We'll advance in our classicalizing enterprise by rendering all implications which are not true enough to be assertible completely false. Which means that we countenance the principle of implicative funnel: «p→q→r∨.p→q»: an implication is either true or else so false that it implies everything. From a proof-theoretical viewpoint that means that we split our proofs into two branches: one wherein we suppose that p∴q, the other wherein we suppose that p→q∴r; if the same conclusion follows from both branches, we'll assert it.
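Schematically, to establish a conclusion «s» which may turn on the implication «p→q»:

Branch 1: suppose p∴q (the implication is true, and so available for MP); derive «s».
Branch 2: suppose p→q∴r, for arbitrary «r» (the implication is utterly false); derive «s».

Since implicative funnel guarantees that one of the two suppositions is in order, «s» may be asserted outright.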

But there is a similar consideration as regards another classical principle, compatible with our overhaul of relevant implication as a connective expressing that the degree of falseness of the apodosis is at most as high as that of the protasis. I am referring to the principle of linearization, «p→q∨.q→p». Same procedure: we split our proofs, and look out for the outcome.

§5.-- Proving (and Deriving) what in E has to be Taken as «Given»

Two striking results follow. One: Adjunction may cease to be a primitive rule. We can countenance this rule as the only inference rule in our Hilbert-style system: for n≥1, «p¹→q∨.p²→q∨....∨.pⁿ→q», «p¹», ..., «pⁿ» ∴ q. When n=1, it is MP. The rationale for thus recovering Adjunction is that either p implies p-and-q or else q implies p-and-q, as the following sketch shows.
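Adjunction comes out as the case n=2; the interpolated first line is obtainable from linearization together with reduced Factor:

(1) (p→.p∧q)∨(q→.p∧q)      theorem: by linearization, «p→q∨.q→p»; RF carries the first disjunct to «p→.p∧q» and, mutatis mutandis, the second to «q→.p∧q»
(2) p                      premise
(3) q                      premise
(4) p∧q                    (1), (2), (3), by the rule with n=2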

Second striking result: a number of «interpolation principles» may become axiomatic -- in alternative presentations of the system -- through which some principles accepted, so to say, blindly and without justification in system E can be provided with reasonable warrant. For instance, E countenances distribution: «p∨q∧r→.p∧r∨.q∧r». Why is it true? Within the framework of the system we are now considering, its proof is obtained from linearization (and Factor): since the disjunction between p and q either implies p or else implies q, it is immediate that the conjunction between such disjunction and r implies either p-and-r or q-and-r. More importantly, such (widely challenged, and yet to my mind correct) principles of system E as conjoined assertion («p→q∧p→q») and contraction («p→(p→q)→.p→q») now become provable: the former is proved from implicative funnel: we have as a particular case of implicative funnel that «p→q→q∨.p→q»: each of the disjuncts implies the principle of conjoined assertion. Another formula which E countenances as an axiom (by force, so to speak -- using A&B's own words) is «p→r∧(q→r)→.p∨q→r». It seems very clear to me that the axiom is not obvious. A natural link is missing, which is provided in our system by «p∨q→p∨.p∨q→q». Another principle which is not altogether obvious is the principle of conjoined apodoses: «p→q∧(p→r)→.p→.q∧r», which can be proved, too, as a theorem within our system. The most controversial principles of E thus become theorems and are endowed with enhanced evidence (although to be sure our new axioms are probably as controversial as those of E).
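By way of illustration, the proof of distribution just described can be displayed thus:

(1) p∨q→p∨.p∨q→q      the implicative form of linearization invoked above
(2) Branch 1: from «p∨q→p», Factor yields «p∨q∧r→.p∧r», whence «p∨q∧r→.p∧r∨.q∧r».
(3) Branch 2: from «p∨q→q», Factor likewise yields «p∨q∧r→.q∧r», whence the same conclusion.
(4) Both branches ending alike, «p∨q∧r→.p∧r∨.q∧r» is asserted.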

In the same way, some suppression principles implicitly accepted in E (as R. Sylvan has pointed out), through which exported syllogism could be justified (but in E it is not justified, just taken as an additional, primitive, underivable piece of evidence), can now be taken as axiomatic, thus rendering exported syllogism a proved theorem. For instance, an Adjunction principle for implications: «p→q→.r→s→.p→q∧.r→s». The general Adjunction principle is wrong, but we are by now aware that implications are special. That gives us a rationale for those suppressive principles of system E.

What constitutes E's weirdness in that connection is that it countenances suppression in exported form but not in imported form; else, Factor would become provable. In particular from «p→q∧(r→r)→s» the protasis's second conjunct cannot be suppressed.

Moreover, some other anomalies of E are cured. For instance in E there is an asymmetry between disjunction and conjunction: «p→(q∧r)I.p→q∧.p→r» (`I' is mutual implication), but not «p→(q∨r)I.p→q∨.p→r». Likewise, in E we have «p∨q→rI.p→r∧.q→r» but not «p∧q→rI.p→r∨.q→r». All those equivalences obtain in the system we are sketching.

§6.-- Conclusion

From an orthodox relevantist viewpoint -- if there is such a thing -- this whole enterprise is pointless, for we are doomed to countenance irrelevances. That seems to me the usual «all or nothing», a bad maxim we had better shy away from. Our middle course offers a view of reasoning (if you like, of some idealized sort of reasoning) which is in between that of classical logic and that of orthodox relevantism. I call it relevantoid logic. It eschews VEQ («p→.q→p»); it has the Ackermann property. It has (at least within the domain of the sentential calculus) the entailment property («p¹», ..., «pⁿ» ∴ «q» iff «r→r» ∴ «p¹∧...∧pⁿ→q»). It avoids the Cornubia rule for non-strong negation. It also avoids unqualified exportation («p∧q→r→.p→.q→r»). It also avoids the validity of the Dugundji formulae (for any finite n): «p¹Ip²∨.p²Ip³∨....∨.pⁿ⁻¹Ipⁿ».

The system just sketched is close enough to the most widely publicized relevant system of entailment logic, E, with which, despite the chasm our strengthening has opened, sufficient closeness remains to allow bridges to be built. (On the other hand, our system is much closer to classical logic than almost any other nonclassical logic is; to be more specific: we are very close to accepting what the classicist accepts, but at the same time we are far apart from the classicist attitude as far as rejection is concerned: we refrain from rejecting true contradictions, while the classicist wrongly equates rejecting something with asserting a negation thereof.)

I think this system is a better logic of reasoning. Reasoning as thus implemented is of course somewhat artificial. I do not deny that more natural systems can be found. But naturalness has its price, too. I wonder whether some part of the task can be handled by a pragmatic rounding out of purely inferential logic. But those are matters for a further inquiry.