Sorites (Σωρίτης), ISSN 1135-1349

http://www.sorites.org

Issue # 20 — March 2008. Pp. 141-156

Hypothesis Testing Analysis

Copyright © by Mikael Eriksson and Sorites

 

Hypothesis Testing Analysis

by Mikael Eriksson

────────────────────

1. Introduction

              Looking directly into some technicalities of the empirical philosophical tradition, the essence of hypothesis confirmation and hypothesis falsification is that hypotheses are based in, and perhaps are stronger than, the evidence that experience provides. Confirmation singles out those hypotheses that are based in, and perhaps go beyond, the evidence. Falsification, on the other hand, singles out those hypotheses that are inconsistent with the evidence. The very same traditions recognise the well-known consequence that confirmation and falsification involve induction and correlation problems. Confirmation tends to include more information than the evidence provides, while falsification is not strong enough to tell, in general, what is false in the falsified hypothesis. As if this were not problem enough, it is also generally held that confirmation and falsification clash with each other. These problems put confirmation and falsification in an awkward position for fulfilling the conditions of being a set of logical constants validating the natural phenomenon of empirical knowledge gaining activities.

              For starters, look at the notion of falsification. Popper talked about a hypothesis h being better corroborated than a hypothesis h' on some evidence e, in the sense that h can be tested for falsification by more observational statements than h'. (Popper, 1972) Corroboration includes this falsification, which in turn has the consequence that some given observation, testing a hypothesis, falsifies it without being able to show which part of the hypothesis is false. The accustomed logical understanding of this is as follows. The predicate «e falsifies h» boils down to the formula «h implies e». A well-known logical theorem says that in denying the consequent of a formula, it does not follow which part of the antecedent is false. Therefore, when e falsifies h and h is a complex formula, it is not clear which part of h is false. However, the natural idea of falsification clearly indicates a proper correlation between evidence and hypothesis, demanding more of «e falsifies h» than the implication analysis shown. It therefore seems to me that Popper's logical understanding of falsification needs revision. Rather, how do h and e correlate?

              Now, what do the empiricists have to offer? The vast empiricist tradition searches for methods of gaining empirical knowledge along the notion of induction, taking off from the Hume tradition. (Hume, 1777) In the early 20th century the empirical ideas took further steps, understanding confirmation as testing, which by that time was viewed as an inductive and non-demonstrative inference from evidence to hypotheses. Later on, empiricists developed this basic understanding of confirmation into demonstrative probability, still emulating induction. There, Carnap stipulated that the formula «evidence confirms a hypothesis» means a hypothesis becoming accepted scientific knowledge and nearly (increasingly) certain in the light of one (several) bodies of evidence — absolute (relevance) confirmation. (Carnap, 1962) This induction idea was governed by the assumptions that hypotheses are based in evidence, and might be stronger than that evidence. In those cases where the hypotheses do not go beyond the evidence, the hypotheses and evidence are indeed identical. I claim that this seemingly fair view is false and leads to paradoxes, as section 5.1 will show.

              With this outline I have tried to show the problems involved in both empirical accounts, each aiming at the trophy of validating the natural phenomenon of empirical knowledge gaining. The rising question is whether there might be another way to define this empirical set of constants. We know that the falsification account claims formulas of the form «hypothesis h is better corroborated than hypothesis h' on evidence e». Empiricists, on the other hand, claim formulas of the form «evidence e confirms hypothesis h». Putting it this way, it seems to me that both accounts put their efforts into analysing the relation between e and h, while I think the emphasis first needs to be on e and h separately. Developing a system along this view gives rise to a new terminology which I hope will bring some light to both the Popper notion of falsification modus tollens and the Carnap notion of confirmation probability.

2. Natural language analysis

2.1 Idea

              In pursuing empirical knowledge reasoning, I search for valid argument schemata characterising everyday scientific language. A usual sentence is «I am interested in this phenomenon or this hypothesis». Out of such ordinary scientific language, it could be possible to single out argument schemata. One way of unveiling these suggested schemata is to pose the following question. What makes scientists accept and become nearly certain of a hypothesis? It seems clear that scientists, after making their conjectures, use evidence to test their hypotheses, aiming at confirming or falsifying them. However, this clarity is not preserved when it comes to understanding the involved phenomenon of testing. Aiming at this seeming opacity, I believe the notion should be understood in terms of the correlation of hypothesis and evidence as well as in terms of the meaning of hypothesis and evidence. Consider this classic example. In my view, when Galilei claimed that there is a halo around the moon (not the face of God turned to the observer, as claimed by contemporaries), he used his perception to collect evidence. He also made a hypothesis claiming the idea of a «halo around the moon». Finally, he used his human ability to relate his hypothesis to his evidence. In a language view, I refer to these as the evidence formula, the hypothesis formula, and the procedure formula. I put these three formulas together in the following way. The procedure formula tests the hypothesis-formula «halo around the moon» with the evidence-formula «halo around the moon».

              With this view of testing established, I go further by analysing what these three formulas are. By evidence-formula I mean a logical formula describing the evidence claim (for instance the perception of some entity). By hypothesis-formula I mean a logical formula describing the hypothesis claim (for instance an idea claimed by someone). I hold that the evidence and hypothesis formulas are distinct in character but similar to each other, having the same simple and logical form. Finally, the procedure formula I mentioned is a logical construction of these evidence and hypothesis formulas. This logical construction will depend on the logical form of the evidence and hypothesis formulas, as section 3 shows. Summing this up, I view and analyse testing as including procedures over hypothesis and evidence formulas.

              This analysis of testing makes up the first face of empirical knowledge-gaining (besides conjecturing). Note that this face of testing involves the principle of similar evidence and hypothesis formulas, which will later be shown to explain the rationalist's evidence and hypothesis correlation problem. I believe there is a second face of empirical knowledge-gaining, which I claim to be hypothesis inclusion. This means that my two-faced formulation of empirical knowledge-gaining divides a scientist's hypothesis claim into two parts. The first part is a sub-hypothesis, which is tested by evidence. The second part is the rest of the hypothesis, which belongs to the intuition of the scientist but is not present in the evidence. This way of understanding empirical knowledge-gaining explains the empiricist's induction inference problems, as I will show.

              Summing the idea up, I view confirmation as constructive empirical knowledge-gaining and analyse it in terms of testing and inclusion. This can be expressed by the conjunction of the two formulas «this evidence tests this hypothesis-part» and «the full hypothesis compares with that hypothesis-part». Along this view, I fit in falsification as reductive empirical knowledge-gaining and analyse it in terms of reductive testing, as shown below. In this way, both confirmation and falsification are defined in terms of testing, seemingly opening a common ground for both the Carnap and Popper terminologies.

2.2 Analysis

2.2.1 Distinctiveness

              The above shows that I understand testing as including the condition that evidence and hypothesis formulas are distinct from each other. Going into this in detail, by formulas I mean logical formulas, where the simple parts of the evidence-formula refer to simple evidence-claims and the simple parts of the hypothesis-formula refer to simple hypothesis-claims. By the term distinct, I mean that the simple parts of evidence-formulas are distinct in kind from the simple parts of hypothesis-formulas.

              This view of distinctiveness is based in my intuition of what hypothesis and evidence are. To me, evidence-claims and hypothesis-claims are human activities about the natural world. A natural entity differs from both the human perception of it and the human understanding of it. Evidence-claims (related to science) are claims focusing upon natural entities from a perceptual point of view. Analogously, hypothesis-claims (related to science) are claims focusing upon natural entities from an idea point of view. These different points of view of the natural entity entail that the evidence has at least some quality different from the hypothesis. This makes evidence distinct from hypotheses.

              Let's see how this distinctiveness applies to the Galilei example. When Galilei claimed that there is a halo around the moon (not the face of God turned to the observer), he focused upon the natural situation of the moon from his perceptual point of view. His perception included biochemical properties. The very same properties were not included in his hypothesis-claim that there is a halo around the moon. This makes his evidence claim distinct from his hypothesis claim. Relating this to Carnap, he also points out on page 12 that neurological factors determine inductive reasoning. (Carnap, 1971) Still, he claims that «e implies h» if e and h refer to state-descriptions having an inclusion relation. That is, h is included in e. Understanding the neurological factors Carnap mentions as I do makes his claim contradictory, as e and h have some neurological properties distinct from each other. However, it would be possible to rewrite the Carnap claim in the following manner. My claim of hypothesis and evidence distinctiveness is consistent with world-states including the two kinds of evidence and hypotheses referring to the same situation in the world-state. The evidence, having its properties, is part of a two-faced world-state alongside the hypothesis, having some distinct properties. However, contrary to Carnap, the evidence and hypothesis relation is, due to the distinctiveness, conjunctive rather than implicative. To formalise this evidence E and hypothesis H distinctiveness, I use the following terminology. (A ╞ B means B is true in A.)

E,H ╞ F,G   iff   E ╞ F , H ╞ G

E,H ╞ F ∧ G  iff  E ╞ F and H ╞ G

E,H ╞ ¬F  iff  not E,H ╞ F

E,H ╞ F → G  iff  E,H ╞ ¬(F ∧ ¬G)

2.2.2 Similarity

              After analysing the distinctiveness of e and h, I go on to investigate the e and h relation. I understand testing as including the condition that an evidence-formula F and a hypothesis-formula G are similar in simple form and the same in logical form. By similar simple form I mean that the logically simple evidence claim F of a natural entity and the logically simple hypothesis claim G of that natural entity are similar. Same logical form means that F has the same logical structure as G. I set out a terminology for this, as follows.

The indexed formulas Fi and Gi mean that F has similar simple form and same logical form as G.

2.2.3 Matching, procedure and equivalence

              I also understand the notion of testing as including the matching condition. Matching combines the condition of distinct evidence and hypothesis formulas with the condition of similar evidence and hypothesis formulas. Matching M is the conjunction of the distinct evidence and hypothesis formulas, which have similar simple form and the same complex logical form.

M(Fi,Gi)   iff   Fi ∧ Gi

              To me, the notion of testing also includes a procedure showing how the test is to be done. In language, the procedure is defined as a logical construction of the involved matching predicates. That is, the logical construction depends upon which arguments the matching predicates have. For instance, the procedure of testing a conjunction formula is to first test one of the conjuncts and then to test the second conjunct.

Procedure of testing Gi ∧ Gj is the procedure of testing Gi and the procedure of testing Gj

Procedure of testing Gi ∨ Gj is either the procedure of testing Gi or the procedure of testing Gj

              My final understanding of testing is that it involves an expression condition, inspired by the elegant natural formulation «evidence tests hypothesis». This natural expression form denotes any test procedure, regardless of its logical construction. In formal language, this behaves much like a Frege predicate showing free variables within parentheses. Here, the predicate variables are instead exchanged for formulas. I believe there is a brilliant natural language focusing feature involved here, which I extract for the purposes of this article as follows. The test-predicate T, having the simple arguments Fi and Gi, is defined as the simple test procedure M(Fi,Gi).

T(Fi,Gi) iff M(Fi,Gi)

              Test procedures differ with the hypotheses being tested because the logical constructions of the hypotheses differ. This means that the logical properties of the test predicate T differ with the logical form of its arguments, as section 3.3 will show. This has the unexpected consequence that a system built on these premises includes cases where two test predicates have logically equivalent formulas as arguments without the test predicates having the same logical properties. This phenomenon has special consequences for the well-known raven paradox, as section 6 will show.

3. Formal suggestion

My basic aim in this article is to view the idea of empirical knowledge gaining in terms of confirmation and falsification, but to give both notions a base in the notion of testing. I will now try to fully formalise the notion of testing into a system and then derive the notions of confirmation and falsification from that system.

3.1 Simple test predicate rule

For starters, using natural deduction style, section 2.2 formally includes the deduction below.

 

E ├ Fi   and   H ├ Gi

I∧

E,H ├ Fi ∧ Gi


The formation rule of matching is as follows.

FM

E,H ├ Fi ∧ Gi

 

E,H ├ M(Fi,Gi)


Let the indexed formulas Fi and Gi be «the evidence claim F has similar simple form and same logical form as the hypothesis claim G». Let Fi be a simple formula in system E and let Gi be a simple formula in system H. Let M be matching. Let M(Fi,Gi) be «matching M of Fi and Gi denotes conjunction formula Fi ∧ Gi focusing upon Fi and Gi». Read M(Fi,Gi) as Fi matches Gi.

IM

E,H ├ Fi ∧ Gi

 

E,H ├ M(Fi,Gi)


EM

E,H ├ M(Fi,Gi)

 

E,H ├ Fi ∧ Gi


The section 2.2 also includes a way of expressing matching, formally expressed as the test predicate T(Gi). The test predicate T takes hypothesis formula Gi, the outstanding part of the formula M(Fi,Gi), as argument.

FT

E,H ├ M(Fi,Gi)

 

E,H ├ T(Fi,Gi)


Fi and Gi are simple formulas. M(Fi,Gi) is a formula in the systems E,H. Let T be testing. Let T(Fi,Gi) be «testing T of Fi and Gi denotes M(Fi,Gi) focusing upon Fi and Gi». Read T(Fi,Gi) as Fi tests Gi.

IT'

E,H ├ M(Fi,Gi)

 

E,H ├ T'(Gi)


Read T'(Gi) as                                                              test of Gi

              I will now reformulate the rule IT' into the rule below, for a more natural reading of testing.

IT

E,H ├ M(Fi,Gi)

 

E,H ├ T(Fi,Gi)


Read T(Fi,Gi) as                                                            Fi tests Gi

ET

E,H ├ T(Fi,Gi)

 

E,H ├ M(Fi,Gi)

 

3.2 Probability rule

              Carnap viewed confirmation as being defined as probability, which can be seen in his quantitative concept «evidence supports hypothesis to some degree». (Carnap, 1962) In this article I do claim that probability is part of the quantitative empirical knowledge gaining intuition, but it does not coincide with the qualitative notion of confirmation. Along this line, I suggest the probability notion «the probability of evidence testing hypotheses», distinguishing testing from probability. Here, the quantitative empirical knowledge intuition passes over to the notion of probability, leaving testing a purely qualitative empirical knowledge intuition. In this way, I can nest probability formulas inside test formulas and vice versa, as the following two examples show. In the first example, I test T that at least one individual is a swan Gi and white Gj; T(Fi ∧ Fj , Gi ∧ Gj). (Section 3.3 shows this T conjunction case.) I also test that precisely one thousand individuals are white; T(Fk,Gk). This means two tested hypotheses. Now, the probability P that there are tested white individuals which are also tested as swans is at least one in one thousand; P( T(Fi ∧ Fj , Gi ∧ Gj) | T(Fk,Gk) ). (Below, IP applied to 3.3 explains this formulation.) In this way, I talk about tested hypotheses, and probability applied to these. In the second example, I test the following probability hypothesis. At least one of one thousand whites is a swan. That is, P( Gi ∧ Gj | Gk ). To test this I need an evidence formula similar to the hypothesis. I use the evidence arguments of the above T predicates, and apply probability to these. That is, P( Fi ∧ Fj | Fk ). Now, the evidence formula tests the similar hypothesis formula — T( P( Fi ∧ Fj | Fk ) , P( Gi ∧ Gj | Gk ) ). This shows the relationship between the two notions of testing and probability.

IP

S ├ F

 

S ├ P(F)

 

, where P(Fi ∨ Fj) = P(Fi) + P(Fj) - P(Fi ∧ Fj) and P(Fi ∧ Fj) = P(Fi | Fj) * P(Fj) and the conditional P(Fi | Fj) = P(Fi ∧ Fj) / P(Fj), as well as its converse: given P(Fi | Fj), P(Fj | Fi) = P(Fj ∧ Fi) / ( P(Fj ∧ Fi) + P(¬Fj ∧ Fi) ).
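The identities attached to rule IP are ordinary probability arithmetic, and can be checked numerically. The joint distribution below is an arbitrary made-up example, not drawn from the article.

```python
# an arbitrary joint distribution over Fi and Fj (made-up numbers),
# given as probabilities of the four conjunctions
p = {("Fi", "Fj"): 0.2, ("Fi", "nFj"): 0.1,
     ("nFi", "Fj"): 0.3, ("nFi", "nFj"): 0.4}

P_Fi  = p[("Fi", "Fj")] + p[("Fi", "nFj")]
P_Fj  = p[("Fi", "Fj")] + p[("nFi", "Fj")]
P_and = p[("Fi", "Fj")]

# P(Fi v Fj) = P(Fi) + P(Fj) - P(Fi & Fj)
P_or = P_Fi + P_Fj - P_and
assert abs(P_or - (1 - p[("nFi", "nFj")])) < 1e-12

# P(Fi | Fj) = P(Fi & Fj) / P(Fj), hence P(Fi & Fj) = P(Fi | Fj) * P(Fj)
P_Fi_given_Fj = P_and / P_Fj
assert abs(P_and - P_Fi_given_Fj * P_Fj) < 1e-12

# converse form: P(Fj | Fi) = P(Fj & Fi) / (P(Fj & Fi) + P(~Fj & Fi))
P_Fj_given_Fi = P_and / (P_and + p[("Fi", "nFj")])
assert abs(P_Fj_given_Fi - P_and / P_Fi) < 1e-12
```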

3.3 Complex test predicates

              Section 2.2 makes clear that the test predicates emulate the test part of the natural phenomenon of confirmation. The section 3.1 IT rule singles out the test predicates having simple arguments. The rules below define the remaining test predicates, having logically complex evidence and hypotheses.

Τ∧                                                                    Ε,Η ├ T(Fi ∧ Fj , Gi ∧ Gj)

⇔

 

Ε ├ Fi ∧ Fj  ,  H ├ Gi ∧ Gj

I∧

Ε,Η ├ (Fi ∧ Fj) ∧ (Gi ∧ Gj)

IM

Ε,Η ├ M(Fi ∧ Fj , Gi ∧ Gj)

 

 

Τ→                                                                      Ε,Η ├ T(Fi → Fj , Gi → Gj)

⇔

 

E ├ Fi  ,  E ├ Fi → Fj

H ├ Gi  ,  H ├ Gi → Gj

I∧

E,H ├ (Fi → Fj)  ∧  (Gi → Gj)

IM

Ε,Η ├ M(Fi → Fj , Gi → Gj)

 

 

Τ∨                                                                  Ε,Η ├ T(Fi ∨ Fj , Gi ∨ Gj)

⇔

 

Either Ε ├ Fi  or  Ε ├ Fj  ,  either H ├ Gi  or  H ├ Gj

I∨

Ε,Η ├ (Fi ∨ Fj) , (Gi ∨ Gj)

I∧

Ε,Η ├ (Fi ∨ Fj) ∧ (Gi ∨ Gj)

IM

Ε,Η ├ M(Fi ∨ Fj , Gi ∨ Gj)

 

Τ¬                                                                       Ε,Η ├ T(¬Fi , ¬Gi)

⇔

 

Ε ├ Fi  ,  H ├ Gi

 

:    ,     :

 

⊥    ,     ⊥

Ε ├ ¬Fi  ,  H ├ ¬Gi

I∧

Ε,Η ├ ¬Fi ∧ ¬Gi

IM

Ε,Η ├ M(¬Fi , ¬Gi)

 

Τ∀                                                             E,H ├ T( ∀vFi(v) , ∀uGi(u) )

⇔

 

Ε ├ Fi(c)  ,  H ├ Gi(d)

I∀

Ε ├ ∀vFi(v)  ,  H ├ ∀uGi(u)

I∧

Ε,Η ├ ∀vFi(v) ∧ ∀uGi(u)

IM

E,H ├ M( ∀vFi(v) , ∀uGi(u) )

 

, for arbitrary c in E and arbitrary d in H.


Τ∃                                                             E,H ├ T( ∃vFi(v) , ∃uGi(u) )

⇔

 

Ε ├ Fi(c)  ,  H ├ Gi(d)

I∃

Ε ├ ∃vFi(v)  ,  H ├ ∃uGi(u)

I∧

Ε,Η ├ ∃vFi(v) ∧ ∃uGi(u)

IM

E,H ├ M( ∃vFi(v) , ∃uGi(u) )


, for some c in E and some d in H.

              The T disjunction case is exclusive disjunction. In the T quantification case, the variables are relative to their E and H domains. The conjunction case for understanding the above test predicates is as follows. Suppose the hypothesis conjunction and find an evidence conjunction matching the hypothesis. Introduce T to denote the performed logical steps and read it as «evidence tests hypothesis».

              The system developed above has a special effect on the well-known substitution principle. By that principle, any formula including A → B means the same as the formula including ¬B → ¬A instead. Now, check this in the system above. Suppose that the formula including A → B is the above T implication case. The T implication case definition shows that T(A → B) includes A separately. It is easy to see by the same T implication definition that T(¬B → ¬A) includes ¬B separately. Now, take the three formulas A, A → B, and ¬B included in the above two T predicates. These formulas show that a contradiction follows from claiming that ¬B → ¬A substitutes for A → B in T. Therefore, the substitution principle does not hold for extensional logic with T predicates added.

3.4 Test predicate logical properties

              I will now show some logical consequences of the 3.1 and 3.3 test predicates.

Simple Fi and Gi                                                   T(Fi,Gi)  ↔  (Fi ∧ Gi)

Conjunction                                       T(Fi ∧ Fj , Gi ∧ Gj)  ↔  ( T(Fi,Gi) ∧ T(Fj,Gj) )

Implication                                         T(Fi → Fj , Gi → Gj)  ↔  ( T(Fi,Gi) ∧ Gi → Gj )

Disjunction                                   T( Fi ∨ Fj , Gi ∨ Gj )  ↔  (either T(Fi,Gi) or T(Fj,Gj))

Contradiction                                             ( T(¬Fi,¬Gi) → T(⊥) )  →   T(Fi,Gi)

Negation                                                      ( T(Fi,Gi) → T(⊥) ) → T(¬Fi,¬Gi)

( T(Fi,Gi) ∧ T(¬Fi,¬Gi) )  →  T(⊥)

All quantification                T( ∀vF(v) , ∀uG(u) )  ↔  T( F(c) , G(d) ), for arbitrary c and d.

Existence quantification    T( Fi(c) , Gi(d) )  →  T( ∃vFi(v) , ∃uGi(u) ), for some c and d.

( ( ( T(Fi(c) , Gi(d)) → I )  →  ( T(∃vFi(v) , ∃uGi(u)) → I ) )  ∧

( T(Fi(c) , Gi(d))  →  T(∃vFi(v) , ∃uGi(u)) ) )  →  I

Probability           ( P( T(Fi,Gi) ) ∧ (T(Fi,Gi) ↔ T(Fj,Gj)) )  →  P( T(Fj,Gj) )

( T( P(Ai) ) ∧ P(Ai) = P(Bi) )  →  T( P(Bi) )

              With this formalism hanging over our heads, here is an intuitive example of understanding the above formulas. T(Fi ∧ Fj , Gi ∧ Gj) means that the evidence formula tests the hypothesis formula, following the conjunction test procedure (logical steps). The natural language reading of the above conjunction formula is «testing a conjunction formula means that each conjunct is tested separately». I will not prove all these theorems, due to the length of this paper. I will only prove the conjunction case.

Suppose

Ε,Η ├ T(Fi ∧ Fj , Gi ∧ Gj)

(i)

By 3.3 T∧ def. below (ii)-(iii)

Ε ├ Fi ∧ Fj  ,  H ├ Gi ∧ Gj

(ii)

I∧

Ε,Η ├ (Fi ∧ Fj) ∧ (Gi ∧ Gj)

 

IM

Ε,Η ├ M(Fi ∧ Fj , Gi ∧ Gj)

(iii)

EM

Ε,Η ├ (Fi ∧ Fj) ∧ (Gi ∧ Gj)

 

E∧

Ε ├ Fi ∧ Fj  ,  H ├ Gi ∧ Gj

 

E∧

Ε ├ Fi , Ε ├ Fj   ,  H ├ Gi , H ├ Gj

 

Permutation

Ε ├ Fi , H ├ Gi  ,  Ε ├ Fj , H ├ Gj

(iv)

I∧

Ε,Η ├ Fi ∧ Gi  ,  E,H ├ Fj ∧ Gj

 

IM

Ε,Η ├ M(Fi,Gi)  ,  Ε,Η ├ M(Fj,Gj)

(v)

By T def. above (iv)-(v)

Ε,Η ├ T(Fi,Gi)  ,  Ε,Η ├ T(Fj,Gj)

 

I∧

Ε,Η ├ T(Fi,Gi) ∧ T(Fj,Gj)

(vi)

I→ (i),(vi)

Ε,Η ├ T(Fi ∧ Fj , Gi ∧ Gj)  →  ( T(Fi,Gi) ∧ T(Fj,Gj) )

 

 

Suppose

Ε,Η ├ T(Fi,Gi) ∧ T(Fj,Gj)

(i)

E∧

Ε,Η ├ T(Fi,Gi)  ,  Ε,Η ├ T(Fj,Gj)

 

By T def. (ii)-(iii)

Ε ├ Fi , H ├ Gi   ,   E ├ Fj , H ├ Gj

(ii)

I∧

Ε,Η ├ Fi ∧ Gi   ,  E,H ├ Fj ∧ Gj

 

IM

Ε,Η ├ M(Fi,Gi)  ,  Ε,Η ├ M(Fj,Gj)

(iii)

EM

Ε,Η ├ Fi ∧ Gi   ,  E,H ├ Fj ∧ Gj

 

E∧

Ε ├ Fi , H ├ Gi   ,   E ├ Fj , H ├ Gj

 

Permutation

Ε ├ Fi , E ├ Fj , H ├ Gi , H ├ Gj

(iv)

I∧

Ε,Η ├ Fi ∧ Fj  ,  E,H ├ Gi ∧ Gj

 

IM

Ε,Η ├ M(Fi ∧ Fj , Gi ∧ Gj)

(v)

By T def. (iv)-(v)

Ε,Η ├ T(Fi ∧ Fj , Gi ∧ Gj)

(vi)

I→ (i),(vi)

Ε,Η ├ ( T(Fi,Gi) ∧ T(Fj,Gj) )  →  T(Fi ∧ Fj , Gi ∧ Gj)
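Since the simple case reads T(Fi,Gi) as Fi ∧ Gi (section 3.4), the conjunction equivalence proved above can also be checked by brute force over truth values. This is only a sanity check of that propositional reading, not a substitute for the deduction itself.

```python
from itertools import product

def T(f, g):
    # simple-case reading: T(Fi,Gi) <-> Fi & Gi (section 3.4)
    return f and g

# check T(Fi & Fj, Gi & Gj) <-> T(Fi,Gi) & T(Fj,Gj) over all 16 assignments
conjunction_property_holds = all(
    T(fi and fj, gi and gj) == (T(fi, gi) and T(fj, gj))
    for fi, fj, gi, gj in product([True, False], repeat=4)
)
```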

 

 

3.5 Complex test predicate rules

Based on 3.3 and 3.4 it is now possible to form natural deduction rules for the complex T predicate. I show this for the conjunction case.

FT∧

Ε,Η ├ T(Fi,Gi) ∧ T(Fj,Gj)

 

Ε,Η ├ T(Fi ∧ Fj , Gi ∧ Gj)

 

Fi is a formula in E and Gi is a formula in H. T(Fi,Gi) is a formula in the systems E,H. Let T be testing. Read T(Fi ∧ Fj , Gi ∧ Gj) as Fi ∧ Fj tests Gi ∧ Gj.

IT∧

Ε,Η ├ T(Fi,Gi) ∧ T(Fj,Gj)

 

Ε,Η ├ T(Fi ∧ Fj , Gi ∧ Gj)

 

ET∧

Ε,Η ├ T(Fi ∧ Fj , Gi ∧ Gj)

 

Ε,Η ├ T(Fi,Gi) ∧ T(Fj,Gj)

 

3.6 Confirmation

              Until now I have defined the notion of testing in a formal system. The remaining task is to derive the notions of confirmation and falsification in this system. I start with the notion of confirmation. In my view, confirmation includes more than the above test notion. Scientists often claim hypotheses that after some consideration are shown to be partly based in evidence and partly in their own intuition. Therefore, I view testing as the confirmation base step and complement it with an induction step that deals with comparing the tested hypothesis part with the full hypothesis. In natural language this sounds like «the evidence Fi tests T the hypothesis-part Gi» and «the full hypothesis Gj compares with Gi». This conjunction is then expressed by «Fi confirms Gi», using the natural language feature of focusing mentioned in section 2.2.3. Formally, the confirmation C base step is testing T the hypothesis part Gi using the evidence part Fi. The confirmation C induction step is a measure function P' of Gi and Gj.

IC

Ε,Η ├ T(Fi,Gi)  ∧  P'(Gi|Gj)

 

Ε,Η ├ C(Fi,Gi)

 

, where P' is a measure function used to measure the difference between Gi and Gj.

 

Read C(Fi,Gi) as                                                        Fi confirms Gi

3.7 Falsification

              Traditionally, confirmation is an empiricist term about hypothesis-support, while falsification is a rival rationalist term about hypothesis-denial. However, in my analysis both terms have testing T in common. In consequence, it seems possible to define both concepts without rival views emerging. Rather, they go side by side, forming a three-unity with testing as base.

              I view the natural notion of falsification as reductive testing. That is, falsification is a way of focusing upon the evidence and hypothesis parts of the test procedure, where the evidence denies the hypothesis. Formally, this is formulated as follows, as the section 3.4 test-predicate negation-introduction.

 

IF

E,H ├ ( T(Fi,Gi) → T(⊥) )  →  T(¬Fi,¬Gi)

 

E,H ├ F(¬Fi,Gi)

 

, where F focuses upon ¬Fi and Gi in the first row formula

 

Read F(¬Fi,Gi) as                                                      ¬Fi falsifies Gi


              The first row antecedent includes that the test argument Fi implies contradiction ⊥. Considering the T index-condition, the same applies to Gi. By negation-introduction and test-introduction, the consequent T(¬Fi,¬Gi) follows. (Accordingly, E and H are revised to exclude Fi and Gi.) In the falsification-introduction, the second row focuses upon the evidence and hypothesis parts of the first row. That is, the rule IF starts with a test procedure going wrong and ends up focusing upon the evidence and hypothesis parts of that procedure. Falsification application is not used for hypothesis choice, but for evidence and hypothesis revision. Here is an example. If a claimed individual has certain properties, and some of them are falsified, then that individual is falsified.

4. Semantics

The language shown above has a reference as follows. Language L includes symbols for constants c, variables v, functions f, an assignment function a, satisfaction s, and the logical operators ∧, ∨, →, ↔, ¬, ∀, ∃. U is the set of terms, that is constants C, variables V, and functions. S is semantics. M is the model set.

4.1 Notation convention

Read the formalism                                                   f : A → B [X]

as                                               function f from set A to set B such that condition(s) X

4.2 Assignment function

a  :   V → C [a(vi)  = ci]

4.3 Interpretation function

i  :  C  →  S [(a),(b),(c)]

(a)  i(cj) = sj

(b)  i( f(c1,c2,…,cn) )  =  i( f(i(c1),i(c2),…,i(cn)) )

(c)  i( f(v1,v2,…,vn) )  =  i( f(a(v1),a(v2),…,a(vn)) )

4.4 Satisfaction

M ╞ cj = ck   ⇔   i(cj) = i(ck)

M ╞ vj = vk   ⇔   a(vj) = a(vk)

M ╞ f(t1,t2,…,tn)   ⇔   i( f(t1,t2,…,tn) )

4.5 Logical operators

Logical definition over formula A. A is a Frege predicate, a test predicate, or a logical construction of these.

M ╞ Ai ∧ Aj   iff   M ╞ Ai  and  M ╞ Aj

M ╞ ¬A   iff   not M ╞ A

M ╞ Ai ∨ Aj   iff   either M ╞ Ai  or  M ╞ Aj

M ╞ Ai → Aj   ⇔   M ╞ ¬(Ai ∧ ¬Aj)

M ╞ ∀vA(v)   iff   for any v if v ∈ U  then  M ╞ A(v)[a(v)=c]

M ╞ ∃vA(v)   ⇔   M ╞ ¬∀v¬A(v)

M ╞ T(Ai,Aj)   ⇔   M ╞ A, where T focuses upon Ai and Aj in A.

5. Application

              The basic aim of this article is to contribute to understanding the notion of empirical knowledge gaining. I have suggested that the traditional notions of confirmation and falsification can be somewhat modified and derived in an ordinary logical system complemented with the test rule. It is now time to show the strength of this system by defining stronger notions.

5.1 Evidence-devices

              Scientists use instruments to get information out of the natural world by quantifying properties. I emulate these instruments as sets of evidence formulas, and name these sets evidence-devices.

Let                                                              E be the set of evidence-formulas

Let                                     Ej  ∈ ℘(Ε), where ℘ is power-operator and Ej is evidence-device.

So                                                                E ├ Fi(v)  ⇒  ∃Ej( Ej ├ Fi(v) )

Evidence-devices are language descriptions of the instruments used by scientists. Each such device is a language expression of what the instrument has registered. If the hypothesis testing theory in this article is to be of any use, a central task would be to paraphrase the scientific everyday instruments as evidence-devices. This is a vast task.

5.2 Hypothesis-devices

              Scientists claim ideas in order to suggest natural-world views. I emulate these ideas as sets of hypothesis formulas, and name the set hypothesis-devices H.

Let                                                            H be the set of hypothesis-formulas

Let                                  Hj  ∈  ℘(Η), where ℘ is power-operator and Hj is hypothesis-device.

So                                                              H ├ Gi(u)  ⇒  ∃Hj( Hj ├ Gi(u) )

Hypothesis-devices would usually be associated with humans being smart enough to come up with something scientifically interesting. However, it would also be possible for machines to be hypothesis-devices.

5.3 Test-devices

              Test-devices T(Ej,Hj) are evidence devices testing hypothesis devices. The test-devices show the proper test procedure for any hypothesis and evidence possible. This shows the strength of the theory, which would be able to guide any kind of empirical knowledge gaining activity. I call the following theorem hypothesis-testing completeness.

Def.                              Ej,Hj ├ T(Ej,Hj)   ⇔   Ej,Hj ├ ∀Fi∃Gi Τ(Fi,Gi)  ∧  ∀Gi∃Fi Τ(Fi,Gi)

              The test-devices claim that every hypothesis-device can be tested by some evidence-device and that every evidence-device can test some hypothesis-device. The proof is as follows. By the definition, T(Ej,Hj) is the case. Let Hj be a simple formula; then by the definition there is a simple formula Ej testing Hj. Conversely, let Ej be a simple formula; then by the definition there is a simple formula Hj tested by Ej. The same pair of steps repeats for each remaining logical form, namely conjunction, implication, exclusive disjunction, negation, universal quantification, and existential quantification: in each case the definition supplies an evidence-formula of that form testing the hypothesis-formula, and a hypothesis-formula of that form tested by the evidence-formula. So for any Hj there is an Ej testing Hj, and for any Ej there is an Hj tested by Ej.
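The case-by-case structure of this proof can be mirrored in code. The sketch below is my own toy illustration, not the paper's formalism: formulas are nested tuples tagged with their main connective, and the completeness claim is witnessed by constructing, for any hypothesis formula, an evidence formula of the same logical form (the converse direction is symmetric).

```python
# Formulas as nested tuples: ('atom', name), ('and', a, b),
# ('implies', a, b), ('xor', a, b), ('not', a),
# ('forall', var, body), ('exists', var, body).

def same_logical_form(e, h):
    """True when evidence e and hypothesis h share logical form,
    the condition the test-predicate T(Fi, Gi) requires."""
    if e[0] != h[0]:
        return False
    if e[0] == 'atom':
        return True  # matching simple predicates Fi / Gi
    if e[0] in ('forall', 'exists'):
        return same_logical_form(e[2], h[2])
    return all(same_logical_form(a, b) for a, b in zip(e[1:], h[1:]))

def mirror(h, prefix='F'):
    """Build an evidence formula of the same logical form as h,
    witnessing: for any Hj there is an Ej testing it."""
    if h[0] == 'atom':
        return ('atom', prefix + h[1][1:])  # e.g. 'G1' -> 'F1'
    if h[0] in ('forall', 'exists'):
        return (h[0], h[1], mirror(h[2], prefix))
    return (h[0],) + tuple(mirror(part, prefix) for part in h[1:])

hyp = ('forall', 'x', ('implies', ('atom', 'G1'), ('atom', 'G2')))
ev = mirror(hyp)
assert same_logical_form(ev, hyp)
```

The recursion over connectives is exactly the induction in the prose proof: one clause per logical form, with the atom case as base.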

5.4 Empirical knowledge-gaining programs

              Scientists build up theories by putting together ideas and information about the world, perhaps letting the ideas go beyond the actual information at hand. I understand this phenomenon as confirmation, in terms of section 3.6. Applying the above notion of devices to this definition results in the following formula. Example: let Hk be Newtonian mechanics. Then there is a hypothesis part Hj of Hk, tested by its proper evidence Ej.

E,H ├ C(Ej,Hj)  ⇔  E,H ├ T(Ej,Hj)  ∧  P'(Hj|Hk)

Another knowledge-gaining part occurs when scientists revise their theories in response to hypothesis contradiction or falsification. Section 3.7 shows this. In the device manner, it is possible to formulate:

E,H ├ F(¬Ej,Hj)

Besides confirmation and falsification, scientists use probability. Section 3.2 shows how this works with the notion of testing, and therefore how it works with confirmation and falsification.

E,H ├ P( T(Ej,Hj) | T(Ek,Hk) )

Finally, I define the strongest notion in this theory, called the empirical knowledge-gaining program. T(Q,M,A) focuses upon the reasoning characteristics of at least one of the above three knowledge-gaining parts. This focusing uses the same natural-language principle claimed in section 2.2.3 above. Q is the question, M is the method, and A is the answer. Example: let T(Q,M,A) be T(Ej,Hj). Then Q is the supposition of T(Ej,Hj), M is the deduction, and A is the conclusion. I think this theorem shows the procedure for confirming any imaginable hypothesis.
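The three knowledge-gaining parts (confirmation, falsification, probability) can be run through a toy model. This is only a sketch under strong assumptions of my own: the test predicate is reduced to agreement between a hypothesis part's predictions and recorded observations, and the probability is a relative frequency over an enumerated outcome space; all predicates and data are invented.

```python
from itertools import product

# Toy test predicate T(Ej, Hj): evidence is a list of (value, observed)
# pairs; the test is positive when the hypothesis part's predictions
# agree with every observation. (An invented reduction, not the paper's.)
def T(evidence, hypothesis):
    return all(hypothesis(x) == obs for x, obs in evidence)

def F(evidence, hypothesis):
    """Falsification: some observation disagrees with the prediction."""
    return any(hypothesis(x) != obs for x, obs in evidence)

Hj = lambda x: x >= 0                      # a hypothesis part of some Hk
Ej = [(1, True), (2, True), (-3, False)]   # evidence agreeing with Hj
Ek = [(1, True), (-3, True)]               # evidence contradicting Hj

assert T(Ej, Hj)   # confirmation's test part holds
assert F(Ek, Hj)   # falsification holds

# Quantitative part P( T(Ej,Hj) | T(Ek,Hk) ): relative frequency of one
# positive test given another, over a small enumerated outcome space.
def P(outcomes, event, given):
    g = [w for w in outcomes if given(w)]
    return sum(1 for w in g if event(w)) / len(g)

# Outcomes: joint success/failure of two tests, with test j nested in k.
outcomes = [(j and k, k) for j, k in product([True, False], repeat=2)]
p = P(outcomes, event=lambda w: w[0], given=lambda w: w[1])
```

Under this nesting, p comes out as the fraction of k-successes in which j also succeeds, which is the shape the quantitative part of confirmation takes in the formula above.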

6. Some brief ontological-consequential notes

6.1 Distinctiveness

              Empirical knowledge-gaining programs T(Q,M,A) include the condition that evidence is distinct in kind from hypotheses. This condition challenges the traditional confirmation view that «hypotheses are based upon evidence and go beyond the evidence». In this view, «based upon» means referring to, and «going beyond» means that the hypothesis includes the evidence. In my view, «based upon» means that evidence tests a distinct hypothesis part, and «going beyond» means that the full hypothesis includes this hypothesis part. To me it seems that natural entities differ from the creatures observing them. Intelligent creatures make notes both by perception and by understanding, and so form two distinct kinds of entity: evidence and hypotheses.

6.2 Similarity

              T(Q,M,A) also includes the condition that evidence is similar to hypotheses. I use indexes like Fi and Gi to denote that F and G have similar simple predicates and the same logical form. To me it seems that there is no point in confirming or falsifying a hypothesis-claim using an evidence-claim not similar to it. The hypothetico-deductive principle (Popper,1972) includes the problem of identifying, from a given falsification of a hypothesis, which hypothesis part is false. This problem partly includes the problem of correlating evidence with hypotheses. The T(Q,M,A) similarity condition directs evidence to its proper hypothesis, avoiding this part of the hypothetico-deductive problem. However, the correspondence of evidence and hypotheses is not definite. Someone might suggest that evidence F is similar to hypothesis G; someone else might instead suggest that F' is similar to G. One such way of defining similarity for evidence and hypotheses results in one set of test-predicates, that is, one empirical knowledge-gaining program. Another similarity-defined set of test-predicates yields some other empirical knowledge-gaining program. These programs cannot be properly compared, due to the different conventions or paradigms of similarity. Therefore, such empirical knowledge-gaining programs are incommensurable. This is an application of the notion of paradigm. (Kuhn,1970) The consequence for empirical knowledge-gaining programs is that programs are true in their paradigms. A supported hypothesis is true if and only if it is confirmed in the section 3.6 formal sense, where the involved simple predicates are true in the supported paradigm. There are also other consequences of the notion of similarity. Relative verisimilitude is a relative way of approximating truth that presupposes observational nesting, using the idea that increasing theory truth-content entails increasing observational success, that is, predictive power. (Newton-Smith,1990) In my analysis, the natural relation of evidence and hypotheses makes an empirical request out of this thesis. This follows from its presupposition of truth-values relative to a way of defining test-predicates, with its set of suggested evidence and hypothesis similarities. Further, suggestions that scientific methods are no better than other knowledge-gaining methods (Feyerabend,1975) could be interpreted as claiming that the formulas F and G cannot fulfil the index condition suggested in Fi and Gi.

6.3 Procedure

              Evidence being distinct from, and similar to, hypotheses defines evidence matching hypotheses. Logical constructions of matching define the test procedures, as section 3.3 shows. Now, consider the raven paradox. (Hempel,1985) The premise of the paradox, «confirmation of all ravens are black», is paraphrased in my analysis as the formula T( ∀x(Fi(x) → Fj(x)) , ∀y(Gi(y) → Gj(y)) ), where Fi(x) is the evidence that any individual is a raven and Fj(x) is the evidence that any individual is black. Gi(y) is the hypothesis that any individual is a raven and Gj(y) is the hypothesis that any individual is black. The conclusion of the paradox, «confirmation of all non-black things are non-ravens», is paraphrased analogously, but with both arguments of T contraposed. (My analysis of confirmation also includes probability, but this is not needed to comment on the Hempel paradox. The second part P' of confirmation is not used in the paradox either.) Sections 3.3 and 3.6 show that the two C formulas are defined as two different test procedures. Section 3.3 shows that in this case it is not possible to substitute the arguments in T with logically equivalent arguments, due to the non-equivalent logical constructions of matching predicates defining the T predicates. Therefore, the raven paradox is explained and avoided in this theory. Further, the Carnap concept of confirmation includes the relevance concept, (Carnap,1962) involving terminology such as positive, negative, and irrelevant confirmation. T(Q,M,A) makes this concept pointless.
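The point that T is individuated by syntactic form rather than logical content can be illustrated in a toy tuple representation of formulas (my own sketch, with invented predicate names): the raven formula and its contrapositive are logically equivalent yet have different logical forms, so they define different test procedures.

```python
# Formulas as nested tuples; syntactic form, not logical content,
# individuates the test procedure T in this theory.
raven = ('forall', 'x', ('implies', ('atom', 'Fi'), ('atom', 'Fj')))
contra = ('forall', 'x', ('implies', ('not', ('atom', 'Fj')),
                                     ('not', ('atom', 'Fi'))))

def form(f):
    """The logical form: connectives only, simple predicates erased."""
    if f[0] == 'atom':
        return 'atom'
    if f[0] in ('forall', 'exists'):
        return (f[0], form(f[2]))
    return (f[0],) + tuple(form(part) for part in f[1:])

# Logically equivalent, but different forms, hence different test
# procedures: the substitution behind the raven paradox is blocked.
assert form(raven) != form(contra)
```

This is the blocked substitution step in miniature: an evidence-device matching the raven form does not thereby match the contraposed form.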

              Empirical knowledge-gaining programs T(Q,M,A) view the test part of confirmation as focusing upon evidence and hypotheses as parts of test procedures. This view is the result of stressing the meaning and relation of evidence and hypotheses. (This might be somewhat related to the Keynes idea of explicating the evidence part of confirmation.) One consequence of this ontological view is that it puts new, challenging questions to notions like induction, confirmation, and falsification. For instance, like Carnap I take confirmation to involve a classificatory, a comparative, and a quantitative part. By the first part I mean «evidence confirms hypothesis», T(E,H). By the second and third parts I mean «this evidence confirms this hypothesis better than that evidence confirms that hypothesis», T(E,H) ∧ T(E',H') ∧ P'(H|H'), and «evidence confirms hypothesis to some degree», P( T(Ej,Hj) | T(Ek,Hk) ). However, in my account the classificatory part defines neither the comparative nor the quantitative part. Another example of a consequence concerns induction, where there might be more to it than logical and psychological issues. A third effect is to question whether the rationalistic falsification problem can be solved. To me it seems that the traditional clash between empiricists and rationalists does not hold in the aspect of testing, as the empiricist hypothesis-support view and the rival rationalist hypothesis-denial view seem to converge in the same principle of testing T.

              Finally, in my view empirical knowledge-gaining does not aim at theory choice, but at the cultural use of tested hypotheses. I view T(Q,M,A) as a possible template for the empirical knowledge-gaining method of hypothesis testing, complementing the traditional knowledge-gaining method of theory proof.

7. Conclusion

              Empirical knowledge-gaining is traditionally viewed as being about confirmation and falsification. Those natural notions, in turn, come with suggested analyses, known to include both insights and paradoxes. I try to show that a third terminology, about testing, is available. The essence of this testing terminology is to stress the ontology of confirmation and falsification and to clarify the meaning and relation of evidence and hypotheses. With this settled, the testing terminology wedges itself into the traditional terminologies of confirmation and falsification, and shows the ability to define strong theorems like empirical knowledge-gaining programs. These programs are non-inductive and correlative, and aim to provide a template for some logical aspects of insightful everyday scientific empirical knowledge-gaining in a perhaps less paradoxical way.

8. References

Carnap, R., Logical Foundations of Probability 2nd ed., 1962, University of Chicago Press.

Carnap, R., Studies in Inductive Logic and Probability vol. I, 1971, University of Chicago Press.

Feyerabend, P.K., Against Method, 1975, London: New Left Books.

Frege, G., Begriffsschrift, 1879, Halle: Verlag Louis Nebert.

Goodman, N., Fact, Fiction, and Forecast, 1955, Cambridge, Mass.: Harvard University Press.

Hempel, C.G., Epistemology, Methodology, and Philosophy of Science, 1985, repr. from Erkenntnis, Vol. 22, Nos. 1, 2 and 3.

Hume, D., A Treatise of Human Nature, ed. by L.A. Selby-Bigge, 1958, Clarendon Press.

Hume, D., Enquiry Concerning Human Understanding ed. by L.A. Selby-Bigge, 1927, Oxford.

Keynes, J.M., A Treatise on Probability, 2nd ed., 1929, London and New York.

Kuhn, T.S., The Structure of Scientific Revolutions 2nd ed., 1970, University of Chicago Press.

Newton-Smith, W.H., The Rationality of Science, 1990, Routledge.

Popper, K., Objective Knowledge, 1972, London: Oxford University Press.

────────────────────

Mikael Eriksson

Karolinska Institutet

Stockholm

mikael.eriksson@ki.se