SORITES ISSN 1135-1349

Issue #03. November 1995. Pp. 12-26.

A Naive Variety of Logical Consequence

Copyright © by SORITES and Enrique Alonso

Enrique Alonso

Section 1. Two dogmas

This paper argues for a revision of some of the conditions traditionally imposed on any definition of logical consequence. These conditions can be summed up in two dogmas:

[1] Any precise definition of a consequence relation on a formal language can be carried out by means of two kinds of resources, syntactic and semantic. There is no genuine logical system for which only derivability (or only entailment) can be formally set up.

[2] Derivability for a formal system can be defined by means of a variety of alternative syntactic resources: axiomatic systems, natural deduction, etc. In contrast, entailment presents a relatively stable and universal definition: truth-preservation with respect to some class of models.

It is obvious that our first dogma does not say that every formal system must have equivalent proof-theoretic and model-theoretic definitions of logical consequence. Apparently it only affirms that every genuine formal system can be analyzed alternatively in terms of proof-theoretic notions and model-theoretic ones. Nevertheless, I think this dogma depends on a deeper thesis: the thesis that present formulations of derivability and entailment answer to some frontier inside human mathematical intuition. There are no mixed definitions inhabiting the space between derivability and entailment, no techniques combining proof-theoretic methods with model-theoretic ones to produce new definitions of consequence.

This context justifies the importance conferred on soundness and completeness results. To prove the extensional equivalence of two relations defined by means of very different tools is always a matter of some interest, and it often yields positive mathematical knowledge.

Nevertheless, I think that nothing justifies the gap between derivability and entailment. The imaginary frontier dividing these fields could be -- for some elementary cases -- more a matter of convention than a genuine mathematical fact. This is the part of the first dogma I do not accept. I think it should be possible to define new, relevant varieties of consequence which do not obey this traditional distinction, making use of semantic techniques as well as syntactic ones.

The second dogma listed above constitutes the proper subject of this paper. In it I mentioned a relatively stable and universal definition of semantic consequence, whose format I now offer:

[3] Γ ⊨ β iff ∀I∈Iv [(∀γi∈Γ: I(γi)∈D+) → I(β)∈D+], where,

1. I ranges over Iv,

2. Iv is the set of all admissible valuations, and

3. D+ is a proper subset of the range of the valuation functions I. I call this subset the set of designated values.
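
Format [3] can be rendered schematically as code. The following Python sketch is illustrative only (the function and variable names are mine, not the paper's): a formula is represented as a function from an atom assignment to a value, Iv as a list of valuations, and D+ as a set of designated values.

```python
from itertools import product

def consequence(premises, conclusion, valuations, designated):
    # Schematic rendering of [3]: every valuation I in Iv which maps all
    # premises into D+ must also map the conclusion into D+.
    return all(
        I(conclusion) in designated
        for I in valuations
        if all(I(p) in designated for p in premises)
    )

# Classical instance: Iv = all total assignments over the atoms, D+ = {True}.
def make_valuation(assignment):
    # A formula is represented as a function from an atom assignment to a value.
    return lambda formula: formula(assignment)

atoms = ["p", "q"]
Iv = [make_valuation(dict(zip(atoms, vals)))
      for vals in product([True, False], repeat=len(atoms))]

p = lambda a: a["p"]
q = lambda a: a["q"]
p_or_q = lambda a: a["p"] or a["q"]

print(consequence([p], p_or_q, Iv, {True}))   # p entails p-or-q: True
print(consequence([q], p, Iv, {True}))        # q does not entail p: False
```

Modifying the list `Iv` is the only move the traditional format leaves open; the rest of the schema stays fixed, which is exactly the point made below.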

The element in this format which can be modified to give rise to almost every imaginable semantic consequence relation is the set Iv. We can consider as admissible valuations over a formal language a great variety of mathematical objects. In fact, it would be a hard task to impose any limits whatsoever on what is admissible at this point. However, the other components in [3] do not offer a comparable level of variation. In fact, it is difficult to imagine any alternative to [3] that differs from it in some relevant aspect other than Iv. Universal quantification over the valuations in some set, universal quantification over the formulas in the set of premises, and a material conditional between premises and conclusion are features which seem to be intrinsically related to our basic intuitions about semantic consequence.

The variety of consequence I try to define departs from tradition in one of these fundamental features. I have said that the set Iv can be instantiated by different mathematical objects. However, we always have a completely defined set of admissible valuations, settled by a precise definition. If we change our set Iv we automatically change our logic. Is it possible to consider a family of sets Iv where we usually take only one? Can we have a suitable semantic definition of a consequence relation based on a substantive family of sets of admissible valuations?

I agree with some deviant schools -- relevant and paraconsistent logicians -- that some of the conditions classical logic imposes on a set Iv for it to be admissible are overly restrictive. Nevertheless, I do not think the solution is merely to liberalize these conditions in search of more permissive ones. This strategy does not differ from tradition in one fundamental aspect: it is always necessary to have some precise criteria in order to define the correct extension of Iv.

What seems to be wrong this time is the reliance on fixed criteria defining a unique set Iv, a set which remains constant in total independence of the content of the arguments whose validity it serves to judge. If we consider criteria sensitive to the information codified in the standard propositional language, we obtain a family of sets Iv depending on different admissibility conditions, conditions which will be settled by the linguistic information codified in the arguments of our language.

This suggestion could seem paradoxical at first glance. How is it possible for the information codified in arguments to determine admissibility criteria over Iv, if all the information we can see codified in a formula is the subset of Iv which satisfies it? Nevertheless, I am not alone in defending new kinds of relations among language, information, admissible valuations and logical validity.

I think of the Heterogeneous Logic of I. Humberstone -- Humberstone [1988] -- as a first instance along this line. The novelty supplied by Humberstone is the consideration of two sets of admissible valuations, one for evaluating the formulas in the set of premises of a given argument and the other for evaluating the conclusion. Reflexivity, monotonicity and other abstract properties predicable of consequence relations can be recovered by means of conditions relating assignments to sentential variables in the premises with those to variables in the conclusion. Underlying this development we can find a very remarkable suggestion: the information codified by premises via valuations can contribute to validity in a different way than the information supplied by the conclusion.

In fact, this is the point H. Marraud -- Marraud [1994] -- takes up to elaborate his own suggestion. Under his logic, the set of premises plays a new role with respect to the validity of arguments. The formulas in the premises determine admissibility conditions for the valuations relevant to judging the argument. It would take some time to give a more complete description, so we omit the details here.

Humberstone and Marraud offer good instances of what can be taken as a new line of research, one which considers that some of the information codified in an argument can have a definitive influence on the shape and properties of the mathematical objects relevant to settling its validity.

Section 2. Avoidable commitments

The second dogma described above imposes two kinds of conditions on any suitable semantic definition of a relation of logical consequence. First of all, logical meaning -- in the sense of the set of valuations which satisfies a given formula or set of formulas -- has to be explained in terms of a subset of another set fixed from the beginning. In other words, to assign a meaning to a formula we proceed to determine a subset of a previously fixed set -- Iv -- following appropriate instructions. Secondly, we are supposed to assume as true those conditions implicitly or explicitly followed in the inductive definition of the set Iv.

I do not think these two commitments are of the same importance for our investigation. In fact, I mention the second one only for historical reasons. Nevertheless, it could be of some help in what follows to analyze this point briefly.

R. Routley -- Routley [1979] -- points out the existence of an ontological commitment underlying classical logic. An examination of relevance failures in classical logic shows that part of the responsibility for these failures is owed to the admissibility conditions on valuations. These conditions do not respond, following Routley, to considerations about logical structure; in fact there are no reasons of a purely logical character which could explain some of the requisites classical logic imposes on valuations. The conditions mentioned by Routley are those referring to assignments to variables, that is, those which establish as a matter of pure logic that the only way to assign a value to a sentential variable is by means of a function whose domain is the set of sentential variables and whose range is {t,f}.

Tradition considers truth functions a natural basis for assignments to variables. Routley's argumentation shows that truth functions are the resource classically minded logicians employ to retain certain ontological theses concerning truth and falsity. The world would be furnished in such a way that sentences always have a truth value and never have more than exactly one. To put it otherwise, the world -- at least the idealized world logicians keep in focus -- only admits consistent and complete state-descriptions.

Up to this point the argumentation sustained by Routley is, from our point of view, basically correct and highly suggestive. Nevertheless, I do not consider his solution an effective way to avoid the ontological commitment just identified. It is true that the problem, so posed, seems to offer an immediate solution: if we admit incomplete and inconsistent assignments to variables, the ontological commitment vanishes. It only remains to identify a mathematical resource capable of doing the task truth functions perform in classical logic.

Every beginner in non-classical logics can enumerate a list of mathematical techniques developed to do the job. I mention only three: 1) the inclusion of two defective truth values corresponding to non-standard assignments, that is, to gaps and gluts; 2) the relevant semantics developed by the Australian relevant school, based on an involution operator «*» inside a pseudo-Kripkean semantics; and, finally, 3) the design of a more general resource for assigning values to variables, that is, the use of relations of a certain kind where we formerly made use of truth functions.

The first strategy, sometimes used for technical reasons, is the worst response one could offer to the problem of the ontological commitment. The inclusion of new values in the set {t,f} only suggests a change of ontology, to the effect of liberalizing the overly restrictive conditions formerly sustained.

I think that Routley's position should be defended on a very different basis. If classical truth functions are to be presented as a subtle way to introduce ontological theses into the logical machinery, it should be of some interest to find a mathematical resource of a more fundamental character. Relations R ⊆ L×{t,f} seem to offer the desired tool. Let us note that these relations allow us at the same time to codify inconsistent and incomplete assignments, to consider either of these situations independently of the other and, finally, to recover classical valuations as a very special case of relations, viz., as total functions.

Relations of the type just described present classical functions as an elaborate tool obtained by the successive addition of extra criteria. Moreover, we can obtain this family of semantic resources without removing the classical features of the sentential connectives; that is, the connectives are not responsible for any of the changes.
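
The relational alternative can be illustrated with a small Python sketch (the encoding and names are mine): an assignment is a relation R ⊆ Var×{t,f}, gaps and gluts are read off directly, and classical valuations reappear as the total functional case.

```python
# A relational assignment over atoms: a set of (atom, value) pairs.
# Atoms related to no value are gaps; atoms related to both values are gluts;
# exactly one value per atom recovers a classical (total functional) valuation.

def values(R, atom):
    return {v for (a, v) in R if a == atom}

def is_classical(R, atoms):
    # Classical valuation: every atom gets exactly one of 't', 'f'.
    return all(len(values(R, a)) == 1 for a in atoms)

def gaps(R, atoms):
    return {a for a in atoms if len(values(R, a)) == 0}

def gluts(R, atoms):
    return {a for a in atoms if values(R, a) == {"t", "f"}}

R = {("p", "t"), ("p", "f"), ("r", "t")}   # p is a glut, q a gap, r classical
atoms = {"p", "q", "r"}
print(gluts(R, atoms))          # {'p'}
print(gaps(R, atoms))           # {'q'}
print(is_classical(R, atoms))   # False
```

Nothing in the sketch touches the connectives; only the way variables receive values has been generalized, which is the point of the strategy.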

If we were to accept Routley's thesis, the strategy which puts relations where we formerly put total functions seems the most satisfactory one. It is quite difficult to imagine some other alternative capable of fulfilling the requisites demanded by Routley's thesis. The central point of this thesis states that ontological commitment is a consequence of unjustified restrictions on the admissible classes of models. Those restrictions concern in this case not the connectives but the assignments to variables. Relations, so stated, provide a starting point for the valuations demanded by Routley, and therefore they should avoid the acquisition of any kind of ontological commitment.

Routley's thesis depends heavily on a higher-level supposition: «there are mathematical tools -- or ways to deal with mathematical resources -- which do not determine how things are when used to define a suitable semantics». It is true that relations in L×{t,f} show that total functions could be an overly restrictive starting point for semantics, but this does not mean that we can find mathematically neutral devices for assigning values to sentential variables. In our opinion such an ideal starting point is an illusion, one inconsistent with the existence of a previously defined set Iv of admissible valuations outside of which nothing counts as a suitable interpretation. I think that ontological commitment has to do with the existence of this set as it is conceived by tradition and expressed in [3].

Once we have an assignment to variables which responds to admissibility conditions previously and rigidly stated, we can draw ontological conclusions with respect to the way language represents facts in the world. Admissibility conditions, however permissive, stated from the start as defining conditions for a logic, can only be explained through discourses about how things are -- or about how we think things are, and so on. Our thesis is that ontology cannot be completely avoided without a deep revision of the way language acquires meaning through the semantic machinery.

Section 3. Admissible valuations and significance

The basic claim sustained by Routley concerning the desirability of a logic independent of considerations about ontology is highly valuable. We agree with Routley on the necessity of intensive research around this problem. Nevertheless, we do not think his strategy, or the strategies devoted to liberalizing the admissibility conditions for Iv, can achieve an effective solution.

We have conjectured that the solution to this problem has to do with the way admissible interpretations are introduced to define semantic consequence. Above we called for a new relation between arguments and the valuations relevant to judging their validity. This relation has to do with information codified by language independently of that obtained from satisfaction with respect to some set of admissible valuations or models.

In what follows we are going to explain the apparent paradox contained in our suggestion. We want to describe a procedure which obtains information from the sentences in an argument in order to determine the conditions which the valuations relevant to the validity of this argument have to obey. If we succeed in this task, we can arrive at a definition of consequence independent of any set of admissible valuations fixed from the beginning. In fact, the valuations relevant in each case will depend on the formulas in the argument analyzed at each moment.

The classical propositional calculus makes of sentences such as β∧¬β and β∨¬β very special cases relative to the set of admissible valuations stated for the matter. Classical tautologies and antilogies have the salient feature of expressing in the object language conditions imposed in the metalanguage on admissible valuations. Their special status is owed to the fact that those admissibility conditions have been settled from the beginning and remain constant. The logical meaning of a tautology is, under these considerations, the whole set Iv of admissible valuations, while the meaning of an antilogy is the empty set. The weight that sentences of this kind have on the validity of a given argument, under a definition of consequence like [3], is therefore a very particular one, very different from the weight that contingent sentences have.

We shall say that every sentence whose logical meaning corresponds to the entire space of valuations, or alternatively to the empty set, presents a «conflict of significance». We extend this term to sentences not incurring that situation themselves but containing subformulas which present such a conflict.

Nevertheless, it is quite easy to find contexts where tautologies and antilogies are used in a significant way. They are used not to mention admissibility conditions -- some informal matter -- but to convey genuine information. This happens, for instance, when we discover, perhaps with some surprise, that some sentence β and its negation are both true. We do not want to say that β is therefore paradoxical; we only want to express what we have said, and the way to do that is by means of a true -- not paradoxical -- contradiction β∧¬β. Something very similar can be said with respect to the falsity of tautologies. We can think of situations which make β and its negation both false, giving place to a false -- not undefined -- tautology.

These real-life considerations can be found among the motivations of many partial and paraconsistent logics. The most significant developments in these areas adopt technical devices which generalize the anomalies -- gaps and gluts -- equating them with the standard values -- true and false. I do not think situations like those we have described are the norm; they are the exception. Priest has defended this same position about paraconsistency in many places, but his logic LP does not comply with this intuitive principle.

The pool of strategies developed to capture inconsistent (partial) situations via valuations generalizes this possibility, allowing assignments which attribute the value «paradoxical» («undefined») to every sentential variable. This is an immediate consequence of liberalizing the admissibility conditions associated with the set Iv, and therefore an effect of the standard definition of the semantic consequence relation.

Our task will be to define a consequence relation whose validity criteria include, among other things, the condition that every formula present in an argument occur significantly in the context supplied by that argument.

A significant occurrence of a formula in an argument is an informal notion which requires additional comment. Nevertheless, we cannot say very much at this moment. We know that a significant reading of a formula can make a classical antilogy true, declaring some sentential variable both true and false, and that a parallel situation can be stated for false tautologies. We also know that this move should not be predicated of sentential variables indiscriminately: we do not accept inconsistencies or incomplete information without an explicit and concrete reason to proceed that way. We can conclude, therefore, that significant readings of formulas cannot be obtained merely by modifying the acceptability conditions for valuations. That operation would affect the entire language, and this is a possibility we explicitly reject. If we analyze an argument looking for significance conflicts, it is possible that some sentential variables should be interpreted as allowing gaps (gluts) while the rest retain a perfectly classical behavior, or that still more complex alternatives have to be considered.

The procedure to be developed will take as its starting point the context supplied by an argument -- eventually a set of formulas -- locating the significance conflicts which affect the sentential variables occurring in that argument. Once we have identified these sets of variables, we have to determine the conditions to be satisfied by the valuations relevant to the validity of the argument, leaving aside all those sentential variables which do not present any conflict in that argument. It would be a waste of time to delay the formal translation of these considerations.

Section 4. A significative variety of consequence

In the sequel we are going to adopt a formal expression for our fundamental notions and concepts. We nevertheless avoid detailed proofs, which often offer no extra information to the reader and prevent an illuminating comprehension of the main ideas.

In what follows we are going to deal only with finite sets of formulas. As we shall see, this is a restriction associated with some essential features of the procedure developed to establish significant readings of the formulas in arguments.

Definition 1: Let Γ be a finite set of formulas and let δ be its characteristic formula, i.e., δ = ∧βi for βi ∈ Γ. We shall call µ a bearer subformula in Γ (in δ) iff µ is a classical tautology or antilogy and no subformula of µ has this property.

To identify the bearers in Γ -- or in any formula -- is the first step toward a significant reading of the formulas in a set Γ. Let us note that the notion of a bearer relative to some set Γ goes beyond the notion of a tautological (antilogical) sentence. We can have sets whose characteristic formulas are not tautologies (antilogies) and yet have bearers in Γ pointing out inner conflicts of significance in that set. Let us note that a bearer in Γ goes beyond the classical tautologies (antilogies) in another sense as well: only innermost classical tautologies (antilogies) are bearers. We proceed to identify first the smallest pieces of classical structure which can be responsible for conflicts of significance.

Finding the bearers in Γ is a process which has to be associated with some effective method. We are going to adopt a procedure based on the analytic tableau calculus -- TA. We define the positive (negative) tableau for β, T+(β) (T-(β)) in symbols, as the tableau whose top is given by β (¬β). We make use of positive tableaux to look for antilogies and negative tableaux to look for tautologies. Once we obtain a closed tableau we have to check whether the generating formula contains subformulas whose positive or negative tableau turns out to be closed. If this is the case, the generating formula is not a bearer. As we can see, the procedure is a bit tedious; anyhow, it is not difficult to realize that it is an effective decision method.
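
The bearer-location routine can be sketched as follows. For brevity this Python sketch decides tautology/antilogy by brute-force truth tables rather than by closed tableaux; the verdicts, and hence the bearers of Definition 1, are the same. The encoding of formulas as nested tuples is mine, not the paper's.

```python
from itertools import product

# Formulas: an atom name (string), or ('not', f), ('and', f, g), ('or', f, g).

def atoms_of(f):
    if isinstance(f, str):
        return {f}
    return set().union(*(atoms_of(g) for g in f[1:]))

def eval_formula(f, v):
    if isinstance(f, str):
        return v[f]
    if f[0] == "not":
        return not eval_formula(f[1], v)
    if f[0] == "and":
        return eval_formula(f[1], v) and eval_formula(f[2], v)
    return eval_formula(f[1], v) or eval_formula(f[2], v)

def is_taut_or_antilogy(f):
    ats = sorted(atoms_of(f))
    tv = [eval_formula(f, dict(zip(ats, vs)))
          for vs in product([True, False], repeat=len(ats))]
    return all(tv) or not any(tv)

def proper_subformulas(f):
    if isinstance(f, str):
        return []
    subs = []
    for g in f[1:]:
        subs.append(g)
        subs.extend(proper_subformulas(g))
    return subs

def bearers(f):
    # Innermost tautological or antilogical subformulas of f (Definition 1).
    return [g for g in [f] + proper_subformulas(f)
            if is_taut_or_antilogy(g)
            and not any(is_taut_or_antilogy(h) for h in proper_subformulas(g))]

# Characteristic formula of {¬(p∧¬p), p, ¬q}:
char = ("and", ("and", ("not", ("and", "p", ("not", "p"))), "p"), ("not", "q"))
print(bearers(char))   # [('and', 'p', ('not', 'p'))]
```

Note that the tautology ¬(p∧¬p) is not reported: it contains the antilogy p∧¬p as a subformula, so only the innermost conflict counts, as the definition requires.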

Definition 2: We say that a formula µ is completely analyzed iff each occurrence of a bearer in µ has been labeled with an auxiliary symbol -- say «*».

A completely analyzed formula shows by direct inspection which innermost and smallest pieces of its structure present conflicts of significance. We are then supposed to obtain from these conflicts information about the admissible valuations for the set in which they occur, and we are supposed to do that by means of conditions referring to sentential variables. It is of fundamental importance to realize that a bearer is associated with exactly one closed tableau, positive or negative. The sentential variables responsible for the closure of this tableau can be found by inspecting the paths of the tableau. In what follows we shall speak of the tableau associated with a bearer to refer to the tableau which identifies that subformula as a bearer.

Definition 3: Let T(µ) be the tableau associated with a bearer subformula µ. By the set L(µ) of bearer atoms of µ we understand the closure under unions of the set whose members are the sets of atoms responsible, in each path of T(µ), for the closure of that path.

This definition could seem more complex than expected. Nevertheless, it is justified by the impossibility of determining a unique set of atoms associated with the closure of a tableau. Let us consider the formula δ = (A∨B)∧¬(A∨B). The completely analyzed formula generated by it is [(A∨B)∧¬(A∨B)]*, and its characteristic -- positive, in this case -- tableau T(δ) consists of two paths, one closed by the presence of {A,¬A} and the other closed by {B,¬B}. If we consider the ways to avoid the closure of T(δ), we find that the sets {A}, {B} and {A,B} are equally responsible for the closure we try to block. This example makes clear the reason which carries us to adopt such a strange definition of the set L(δ) of bearer atoms of δ.
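
The closure under unions invoked by Definition 3 is itself a simple operation. The following Python sketch (illustrative names, mine) reproduces the family {A}, {B}, {A,B} of the example from the two per-path closing sets:

```python
def closure_under_unions(family):
    # Smallest family containing every given set and closed under unions.
    closed = {frozenset(s) for s in family}
    changed = True
    while changed:
        changed = False
        for a in list(closed):
            for b in list(closed):
                if a | b not in closed:
                    closed.add(a | b)
                    changed = True
    return closed

# Per-path closing sets of the tableau for (A∨B)∧¬(A∨B):
path_sets = [{"A"}, {"B"}]
print(sorted(sorted(s) for s in closure_under_unions(path_sets)))
# [['A'], ['A', 'B'], ['B']]
```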

Definition 4: By the set Σ(β) of conflicting atoms of a formula β we understand the union of all the sets L(µ), where µ is a bearer in β.

We now introduce a definition that is decisive in what follows.

Definition 5: We shall say that a formula µ is a truth-specification of α iff it is of the form α_s, where s is a finite sequence in the Cartesian n-product {0,1}^n, for n finite.

We are supposed to give an intended interpretation for truth-specifications, one which allows us to read formulas of the form α_s. Mimicking a recursive definition, we shall read α_{s1} as saying that «α_s is true» and α_{s0} as saying that «α_s is false». We do not pretend to offer anything more than a technical device, but it has to be recognized that truth-specifications can play an important role for philosophical reasons. For instance, we think that a true contradiction says exactly α_{s1} ∧ α_{s0}, and something similar could be said with respect to false tautologies. Truth-specifications seem to be of some utility when we have to express facts which differ from what is considered usual with respect to truth. Truth-specifications seem to be a way out of the norm in matters where truth plays a fundamental role.

So much for philosophy. The notions of bearer subformula, of conflicting atoms and, finally, of the truth-specification of a formula allow us to define a procedure to deal with conflicts of significance. We are going to solve the conflicts identified in a completely analyzed formula by means of its bearers, making use of truth-specifications affecting the conflicting atoms. This solution suggests making use of a translation function from the standard classical propositional language to a propositional language accepting truth-specifications of sentential variables. It goes without saying that truth-specifications of conflicting atoms yield information about the admissibility conditions for the valuations which make the analyzed formula significant.

Our translation has to be defined as the composition of two translation functions, t/Δ and g/Δ, both relative to a set Δ of atoms belonging to Σ(β) for a given formula β.

Translation function t/Δ:

c0) If β_s ∈ Var, then:

i) if β_s ∈ Δ, then [(β_s)^i]^t = (β)_i, where i ∈ {0,1}

ii) if β_s ∉ Δ, then:

a) [(β_s)^1]^t = β

b) [(β_s)^0]^t = ¬β,

where Δ is a finite set of atoms.

c1) If β = ¬α, then:

i) [(¬α)^1]^t = ¬[(α)^0]^t

ii) [(¬α)^0]^t = ¬[(α)^1]^t

c2) If β = (α∨γ), then:

i) [(α∨γ)^1]^t = [(α)^1]^t ∨ [(γ)^1]^t

ii) [(α∨γ)^0]^t = [(α)^0]^t ∧ [(γ)^0]^t

c3) If β = (α∧γ), then:

i) [(α∧γ)^1]^t = [(α)^1]^t ∧ [(γ)^1]^t

ii) [(α∧γ)^0]^t = [(α)^0]^t ∨ [(γ)^0]^t

c4) If β = (α→γ), then:

i) [(α→γ)^1]^t = [(α)^0]^t ∨ [(γ)^1]^t

ii) [(α→γ)^0]^t = [(α)^1]^t ∧ [(γ)^0]^t.

Translation function g/Δ:

c0) If β is of the form (α)*, then [(α)*]^{g/Δ} = [(α)^1]^{t/Δ}

c1) If β contains some subformula of the form (α)*, α different from β, then:

i) if β = ¬γ, then [¬γ]^{g/Δ} = ¬([γ]^{g/Δ})

ii) if β = (α∘γ), then [(α∘γ)]^{g/Δ} = ([α]^{g/Δ} ∘ [γ]^{g/Δ}), where ∘ ∈ {∨,∧,→}

c2) If β does not contain any subformula of the form (α)*, then [β]^{g/Δ} = [(β)^1]^{t/Δ},

Δ being a finite set of atoms.

The way in which t/Δ and g/Δ are related can be deduced from clauses c0) and c2) for g/Δ.
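
The t/Δ clauses can be sketched as a recursion over formulas. A caveat: the clauses as printed reintroduce a negation sign in c1), while the worked examples below push the negation through to the atoms and write the index 2 for the «false» specification; this Python sketch follows the practice of the examples, and its encoding and names are mine.

```python
def t(form, value, delta):
    # t/Δ sketch. form: atom string or ('not', f) / ('and', f, g) / ('or', f, g).
    # value: 1 or 0, the truth-specification being translated.
    # Follows the worked examples: negation flips the specification (NNF-style),
    # and conflicting atoms receive index 1 (true) or 2 (false).
    if isinstance(form, str):                          # clause c0)
        if form in delta:
            return form + ("1" if value == 1 else "2")     # c0)i, example style
        return form if value == 1 else ("not", form)       # c0)ii
    op, *args = form
    if op == "not":                                    # clause c1)
        return t(args[0], 1 - value, delta)
    if op == "or":                                     # clause c2)
        inner = "or" if value == 1 else "and"
    else:                                              # clause c3), op == "and"
        inner = "and" if value == 1 else "or"
    return (inner, t(args[0], value, delta), t(args[1], value, delta))

# Characteristic formula of {(p∨q), ¬(p∨q)}, translated with Δ = {p}:
char = ("and", ("or", "p", "q"), ("not", ("or", "p", "q")))
print(t(char, 1, {"p"}))
# ('and', ('or', 'p1', 'q'), ('and', 'p2', ('not', 'q'))) -- i.e. (p1∨q)∧p2∧¬q
```

Up to the bracketing of conjunctions, the output is the first disjunct of the resolution given for this set below.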

Now we can define one of the most fundamental notions of this paper:

Resolution of β: By a resolution of a sentence β we understand the following formula:

⋁_{Δ∈Σ(β)} (β)^{g/Δ}.

We think a detailed example may clarify the procedure just defined.

Example: Resolution of Γ = {¬(p∧¬p), p, ¬q}

1. ¬(p∧¬p) ∧ p ∧ ¬q -- characteristic formula of Γ.

2. ¬[(p∧¬p)]* ∧ p ∧ ¬q -- by the routine for bearer subformulas.

3. Σ(β) = {{p}} -- by the routine for conflicting atoms.

4. [¬[(p∧¬p)]* ∧ p ∧ ¬q]^{g/{p}} -- first step of the translation routine.

5. [¬[(p∧¬p)]*]^{g/{p}} ∧ [p]^{g/{p}} ∧ [¬q]^{g/{p}} -- by c1)ii of g/Δ.

6. ¬[(p∧¬p)*]^{g/{p}} ∧ [p]^{g/{p}} ∧ [¬q]^{g/{p}} -- by c1)i of g/Δ.

7. ¬[(p∧¬p)]^{g/{p}} ∧ [p]^{g/{p}} ∧ [¬q]^{g/{p}} -- by c0) of g/Δ.

8. ¬(p∧¬p)^{1t} ∧ p^{1t} ∧ (¬q)^{1t} [mod. {p}] -- by c2).

9. ¬(p^{1t} ∧ (¬p)^{1t}) ∧ p^{1t} ∧ q^{2t} -- by c3) and c1)ii of t/Δ.

10. ¬(p^{1t} ∧ p^{2t}) ∧ p^{1t} ∧ q^{2t} -- by c1)ii of t/Δ.

11. ¬(p_1 ∧ p_2) ∧ p_1 ∧ ¬q -- by c0)i and ii of t/Δ.

Once again, we take finite sets and their characteristic formulas as interchangeable notions when needed.

I mention some other examples without going into details. Let Γ be the set {(p∨q), ¬(p∨q)}. The set of conflicting atoms associated with its characteristic formula β is Σ(β) = {{p},{q},{p,q}}, which yields a resolution consisting in:

[(p_1∨q) ∧ p_2 ∧ ¬q] ∨ [(p∨q_1) ∧ ¬p ∧ q_2] ∨ [(p_1∨q_1) ∧ p_2 ∧ q_2].

One of the most salient features of the way we deal with conflicts of significance is that it proceeds stepwise. We first locate the innermost bearer subformulas of a given formula -- or finite set -- and then we determine a resolution for this formula according to the appropriate definition. Nothing prevents the resolution of a formula from itself containing conflicts of significance of a higher order. An easy example is given by Γ = {(p∧¬p), ¬(p∧¬p)}. Its resolution yields the formula (p_1∧p_2) ∧ ¬(p_1∧p_2), which is not free of significance conflicts. This time the conflicts affect truth-specifications of standard atoms, which can be taken as new atoms if necessary.

Anyhow, the conflict shown by this formula can be solved by iterating the entire process once again. This time the resolution will be the formula (p_{11}∧p_{21}) ∧ (p_{12}∨p_{22}).

Definition 6: By the last resolution of a formula β, β^σ in symbols, we mean the resolution free of bearer subformulas.

It is quite obvious that the last resolutions of formulas constitute the basic elements for defining the variety of consequence we were looking for. Nevertheless, the resolution process just defined makes no mention of arguments; it only deals with formulas and finite sets of formulas. But a standard argument is nothing other than an ordered pair <Γ,β> of a set of formulas and a formula, so some relation can be expected. If we ignore order and limit ourselves to finite sets of premises, we obtain a set Γ∪{β} which seems to offer the natural context in which to execute our resolution procedure. The relevant information supplied by this set is that concerning conflicting atoms, and once we have the set Σ(Γ∪{β}) generated by some argument Γ ⊢ β, for Γ finite, we can look for successive resolutions of the formulas in the argument. Eventually we reach a resolution of premises and conclusion which satisfies the conditions imposed by Definition 6. The resulting argument is the translation of the original argument into a sentential language allowing specifications of atoms, and it constitutes a reading free of conflicts of significance.

Naive consequence: Let Γ be a finite set of formulas and let β be a formula. Then

Γ ⊩ β iff Γ^σ ⊢CPC β^σ with respect to the set Σ(Γ∪{β}), where Γ^σ and β^σ are the last resolutions of the characteristic formula of Γ and of β respectively.

I omit the proof that the resolution method is effective, in the sense that it halts, yielding an argument formed by the last resolutions of all the formulas in the argument under examination.

Section 5. Some comments about naive consequence and its applications

I shall not justify the utility of this variety of consequence by appealing to some successful research program outside the main topics of logic. On the contrary, I think that naive consequence can bear fruit once traditional problems in logic are revisited from new perspectives.

Naive consequence offers a partial and paraconsistent variety of consequence relation which preserves classical inferences whenever possible, that is, whenever no explicit conflict of significance is present.

Priest [1979] introduces the notions of «valid» and «quasi-valid» inference in the context of a revision of Gödel's incompleteness theorems from the point of view of paraconsistent logic. Priest's argument shows that a paraconsistent interpretation of Gödel's incompleteness theorems can be carried out without falling into triviality. Now the Gödel sentence is not independent of Peano Arithmetic: it turns out to be paradoxical and therefore, following Priest, an acceptable consequence of the PA axioms. The unprovability in PA of PA's consistency offers another example of a paradoxical sentence, thereby removing a considerable amount of classical orthodoxy.

The success of this paraconsistent threat to arithmetical orthodoxy hangs on the adequacy of the elementary logic supplied in place of CPC. Nevertheless, the logic LP developed by Priest fails in some relevant respects. For instance, it does not capture the intended difference between valid and quasi-valid inferences. LP is a paraconsistent logic obtained by revising the admissibility conditions on assignments, and it is therefore paraconsistent everywhere. Its characteristic consequence relation cannot distinguish those inferences acceptable only in consistent contexts of deduction -- where no paradox is present -- from those inferences valid everywhere. LP rejects MP and DS because paradoxes can occur among the premises, allowing one to conclude something false. Take as an instance of DS the argument A, ¬A∨B ⊢ B. The reason to reject its validity is that nothing prevents, in a paraconsistent logic such as LP, A from being true and false at the same time, which destroys the basis for concluding B. From the point of view of naive consequence, this instance of DS remains valid: no conflict of significance is present, or at least none has been made explicit, so we are not supposed to proceed on considerations not supplied by the argument under examination. However, if we look for the last resolution of a variant of DS such as ¬A, A, ¬A∨B ⊢ B, we see that it is naively rejectable. Now we know that A has been taken as a sentence both true and false, and this is enough to validate Priest's argument. Comparing the two instances of DS, we see that naive consequence does not obey monotonicity; but fortunately this is not a very popular property nowadays, at least judging by the relative success of non-monotonic logics.
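The contrast between the two instances of DS can be checked with a brute-force classical consequence test. The tuple encoding, the `entails` function, and above all the indexing scheme -- on which the resolved conflict splits A into specifications A1 and A2, with the ¬A-disjunct inheriting the same specification as the conflicting premise ¬A -- are all my own illustrative assumptions; the paper's resolution procedure is the authoritative account:

```python
from itertools import product

# Formulas as nested tuples: ('atom', name), ('not', f), ('and', f, g), ('or', f, g).

def atoms(f):
    return {f[1]} if f[0] == 'atom' else set().union(*(atoms(g) for g in f[1:]))

def ev(f, v):
    op = f[0]
    if op == 'atom':
        return v[f[1]]
    if op == 'not':
        return not ev(f[1], v)
    if op == 'and':
        return ev(f[1], v) and ev(f[2], v)
    return ev(f[1], v) or ev(f[2], v)        # 'or'

def entails(premises, conclusion):
    # classical consequence by truth tables: no valuation makes
    # every premise true while the conclusion is false
    names = sorted(set().union(atoms(conclusion), *map(atoms, premises)))
    for bits in product([True, False], repeat=len(names)):
        v = dict(zip(names, bits))
        if all(ev(p, v) for p in premises) and not ev(conclusion, v):
            return False
    return True

A, B = ('atom', 'A'), ('atom', 'B')
A1, A2 = ('atom', 'A1'), ('atom', 'A2')

# DS with no explicit conflict stays classically valid: A, ¬A∨B ⊢ B
print(entails([A, ('or', ('not', A), B)], B))                    # True

# After resolving the explicit conflict in ¬A, A, ¬A∨B, the occurrences of A
# split into specifications, and B no longer follows classically
print(entails([('not', A2), A1, ('or', ('not', A2), B)], B))     # False
```

The failing valuation in the second test is the one with A1 true and A2 false: it verifies all three resolved premises while falsifying B, which is exactly the effect of registering that A has been taken both true and false.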

I bring these comments to a close by pointing out some problems we shall face in further developments. Naive consequence has been set up in an indirect way: no semantics has been defined, and no inference rules or axioms have been offered. From the point of view of orthodoxy, as summed up in the dogmas listed above, we do not have a genuine mathematical interpretation of any consequence relation. At most we have a translation which defines some secondary mathematical object, one probably having something to do with consequence.

The definition of naive consequence depends on the resolution procedure and on classical consequence. Resolution is a syntactic tool based on finitary considerations. Once we have obtained the last resolution of an argument, it does not matter whether we take classical derivability or classical entailment to establish naive consequence in terms of that resolution. The resolution has to be effective, in a sense that makes naive consequence a syntactic notion whose semantic mate may not be easy to define. Nevertheless, the resolution of an argument is nothing but another argument in a language allowing truth-specificators for atoms, and this notion of truth-specificator has a semantic flavour I do not want to deny. We should think of the naive analysis of an argument as a procedure which first determines -- in an effective way -- the admissibility conditions for the relevant valuations; from then on everything goes classically. We can say that naive consequence changes the logic used to analyze arguments whenever the context given by an argument requires such a change. To make the point clear, we can think of naive consequence as a relation which runs along the entire hierarchy C0,...,Cn, n<ω, of Arruda and da Costa, looking for the most convenient system for the occasion. Technically, naive consequence is, like Cω, a non-finitely-trivializable logic, but it retains, like Cn for n<ω, a strong negation.

The intended relation between naive semantics and the hierarchy C0,...,Cn goes beyond paraconsistency, including also the partial behavior exhibited by naive consequence. Anyhow, this relation can only be taken as a metaphor; in the limit it does not work.

We can think of the resolution method as a procedure which serves to identify the logic that, in a chain of Arruda's kind, is relevant for judging an argument. The maximal length of the truth-specificators in the last resolutions of the formulas of an argument can be taken as an index of the position this logic occupies in the ordered chain. This suggests an inductive process which can be of fundamental utility in extending the resolution method to infinite sets. If we define the resolution process inside an inductive procedure, we can hope for a fixed-point theorem: we would extend the resolution method to allow infinite truth-specifications for atoms and prove that any set reaches a point where no further conflicts of significance occur. However, we can also define sets in which conflicts of significance always persist; for instance,

Example: Put µ0+ = β and µ0- = ¬β. Now define inductively µ(i+1)+ = µi+ & µi- and µ(i+1)- = ¬µ(i+1)+. We obtain the sets Γj = {µi+ : 0 ≤ i < j} ∪ {µi- : 0 ≤ i < j}, and finally let Γω be the union of all the Γj. It is quite obvious that the resolution process for this set does not seem to reach a fixed point.
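The construction can be programmed directly. My reading of the garbled induction clause as µ(i+1)+ = µi+ & µi- is a guess about the original text, and the sketch below only checks the one property the argument needs: every Γj contains each µi+ together with its negation, so a fresh conflict of significance is present at every level and the conflicts never disappear as j grows:

```python
# Formulas as nested tuples: ('atom', name), ('not', f), ('and', f, g).

def gamma(j, beta=('atom', 'b')):
    # build Γj = {µi+ : 0 <= i < j} ∪ {µi- : 0 <= i < j}, with
    # µ0+ = β, µ0- = ¬β, µ(i+1)+ = µi+ & µi-, µ(i+1)- = ¬µ(i+1)+
    pos, neg = beta, ('not', beta)
    members = []
    for _ in range(j):
        members += [pos, neg]
        pos = ('and', pos, neg)
        neg = ('not', pos)
    return members

g = gamma(4)
# every positive member occurs together with its own negation: a conflict
# of significance at every level of the construction
print(all(('not', f) in g for f in g[0::2]))     # True
```

Since each Γ(j+1) extends Γj with a new conflicting pair built from the previous one, the resolution of Γω would have to chase conflicts through ever longer specifications, which is why no fixed point is in sight.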

All these comments show that naive consequence does not fit well into the framework of standard semantics and, in general, of standard definition strategies for logical consequence. However, I do not think this is the last word on naive consequence, but rather the effect of a significant departure from tradition. I expect that some effort can yield genuine semantic and proof-theoretic tools for dealing with naive consequence, and so prove its utility in partial and paraconsistent revision programs for fundamental parts of contemporary mathematics.

References

da Costa, N.C.A. & Marconi, D. [1989]: «An Overview of Paraconsistent Logic in the 80's», The Journal of Non-Classical Logic, vol. 6, no. 1, pp. 5-32.

Humberstone, I.L. [1988]: «Heterogeneous Logic», Erkenntnis, 29, pp. 395-435.

Marraud, H. [1994]: Doctoral course, Dept. de Lingüística, Lenguas Modernas, Lógica y Filosofía de la Ciencia, U.A.M.

Priest, G. [1979]: «The Logic of Paradox», Journal of Philosophical Logic, 8, pp. 219-241.

Priest, G. [1984]: «The Logic of Paradox Revisited», Journal of Philosophical Logic, 13, pp. 153-179.

Routley, R. [1979]: «The Choice of Logical Foundations: Non-Classical Choices and Ultralogical Choices», Studia Logica, vol. XXXIX, no. 1, pp. 77-98.

Enrique Alonso

Universidad Autónoma de Madrid

E-28071 Madrid, Spain