SORITES, ISSN 1135-1349

Issue #11. December 1999. Pp. 41-65.


Copyright © by SORITES and Enrique Romerales

Amounts of Vagueness, Degrees of Truth

by Enrique Romerales

The view that vagueness is an omnipresent phenomenon has, in recent times, become a sort of philosophical dogma. The suggestion is that there are not only vague words, sentences and concepts, but also vague properties, states of affairs and objects. Moreover, it is sometimes even claimed that every object and every state of affairs is inescapably vagueFoot note 4_1.

Objects, properties, relations and states of affairs all belong to what is usually known as ontology (or metaphysics), and they are interrelated in such a way that the vagueness of one of them will necessarily have an impact on all the rest. For instance, if an object O is a vague one, this will be due to the vagueness of its properties (either because it definitely possesses a vague property, or because it is indeterminate whether it possesses a perfectly precise property). In the first case, vagueness lies with the property (that is, with the predicate), in the second with the object (that is, with the grammatical subject); in both cases the result is a vague state of affairs: the state of affairs in which it is unclear whether the object O possesses or lacks the property P (or stands in a relation R to another object).

Let us leave for another occasion the question whether there exist metaphysically vague objects, because of the metaphysical problems involved in the very conception of an object. In this paper I want to discuss only the semantical aspect, so I will take «object» to stand for the referent of the grammatical subject of a sentence. The question now is: are all objects vague? Let us suppose we use the following criterion for vagueness:

1) An object is vague iff it is possible to predicate some vague term of it.

Let us also suppose that we accept the standard definition of what it is to be a vague term: one of which there are, or could be, borderline (i.e. doubtful) cases of application. This criterion, which seems to be assumed by many philosophers, is extremely liberal, and to my mind accords neither with normal uses of the predicate «vague» by native speakers of English, nor with any of our intuitions, semantical and ontological alike. With such a criterion every object is indeed vague, because for any imaginable object whatsoever (with the possible exception of those of mathematics) not only is it conceivable that some vague term applies to it, but the contrary seems positively inconceivable. For example, focus your attention on John, whose head is entirely covered with hair. Nevertheless, surely there is some possible world in which John begins to lose his hair (maybe owing to a disastrous diet, with plenty of fat), up to the point at which it is proper to say of him -for instance from his thirties on- that in that possible world John is bald. Not «rather bald», or «dubiously bald», but plainly bald. But «bald» is a typically vague predicate, because we do not know how many hairs one has to lose to count as definitely bald. There are innumerable actual cases in which it is doubtful whether the predicate «bald» applies or not (in fact, dermatologists have classified alopecia into six degrees, but, quite obviously, there are many cases in which it is doubtful to which degree some scalp belongs). So our John (the one who inhabits the actual world) is to be counted as a vague object merely because it is logically possible to predicate of him a vague term like «bald». Surely this is absurd; John is a perfectly precise individual (let us suppose for the moment), and clearly not bald, so if John is to be counted among the vague objects, he will have to be so counted for other reasons yet to be spelled out.

The trouble here is the modal form of the first criterion. There are too many logically possible worlds, and every object in the actual world surely also exists in some other possible world at which some vague predicate or other properly applies to it. In order that vagueness should be not a trivial (tautologous) phenomenon but a substantial fact, let us restrict our attention to the actual world. We can try a more restricted criterion:

2) An object is vague iff in fact some vague term is predicated of it.

Now, this criterion continues to be excessively liberal. Take, for instance, an orange that is paradigmatic in all its properties. It has an orange shape, smells of orange and tastes of orange, and, most importantly, looks a splendid orange colour, not in some other possible world, but here, in the actual world. But «orange» (referring to the colour) is supposed to be a vague term, because there actually are cases in which the application of that word is doubtful (objects that are in between orange and red, for instance). Therefore, the orange just referred to is a vague object according to this second criterion, because a term applies to it -although with total precision- which in fact (that is, in some other actual cases) works vaguely. But again, quite obviously, the orange referred to is not a vague object at all, nor are any of its token properties, colour included: it is a perfect and unequivocal token of an orange. So something continues to fail in the second criterion. Let's formulate a yet more restrictive version.

3) An object is vague iff there is some term whose application to it is doubtful.

This time let's take a ripe grapefruit, somewhere between orange and yellow in colour. It seems that this time we do have something straightforwardly and actually vague. In this case, even if «orange» and «yellow» were precise colour terms, our grapefruit is a semantically vague object, because we do not know, or it is dubious, whether the colour term «orange» rather than «yellow» applies to it.

But, once more, the grapefruit in question is not a vague object at all. At most what is vague is its colour, which is a property of the grapefruit, not the grapefruit itself. Ripe or not, it is definitely a grapefruit. There are no problems of individuation (how many grapefruits are in front of us?), nor of identity (what kind of fruit is it?), nor of identification (what spatio-temporal item are we referring to?) concerning this fruit, in spite of its colour being doubtful. What stands in front of us is not a vague object, but at most a doubtful (vague) colour shade of a well defined object. According to this criterion there may well be vague properties, but this is insufficient to show that there exist semantically vague objects.

So, let's try another yet more restrictive criterion:

4) An object is vague iff it is doubtful which sortal predicate in fact applies to it.

Here it seems that we have hit the nail on the head at last. Let's take a different example, and imagine there is in front of us an object similar to a chair but slightly wider and slightly shorter than normal, and with only one arm. Then we don't know whether it is a chair, an armchair or a new kind of object. We don't know what to call it: it is a semantically vague object. Of course, this vagueness does not stem from our lack of adequately fine-grained perceptual discrimination: we perceive the object perfectly well, we see its colour with clarity (let's suppose it to be perfectly white), its size, its shape; we know it is made out of oak and so on. The root of the problem lies in our lack of conceptual discrimination. We have only two concepts under which this object could roughly be included: chair and armchair. But it fits neither concept exactly. Maybe it is just an armchair with one arm missing; maybe it is a new kind of object created by a designer with a purpose we don't know of. Surely in that case the designer will give it a name, and once the function and purpose of the object is grasped we shall have a new concept. If its function is socially useful, objects of the same kind will be reproduced, the concept will become common, and the word -say «onearmchair»- will be added to the English language.

But we are in no need of strange and artificial examples: the natural world continuously provides lots of them. A stream has become so permanently full of water that now we don't know whether it is a stream or a river. A mountain is so eroded by wind and rain that we don't know whether it is a hill. All objects of this kind are semantically vague because it is doubtful which sortal predicate (if any of the ones we actually have) applies to them. But note that only the mountain so small that it is liable to seem rather a hill is vague; the remaining mountains are not vague objects at all. And the same applies to rivers: the cases dubious between river and stream are the exception (if they were the rule, we would have a concept and a term for them).

Now, someone may reply as follows: there is a trick in the latter move. If only mountains that in fact lie in between a mountain and a hill are to be counted among the vague objects, we are not talking about semantical objects, but about physical objects, about beings. But the point at issue is whether a term like «mountain» is or is not vague. And it is indeed vague when there are doubtful cases of application of the term -it does not matter whether many or few- that is, when there exists a physical object of which we don't know, or are not able to determine, whether the term «mountain» applies to it with truth so that we may say «that is a mountain». So in the former case we can blame the physical object for the vagueness (for example the little mountain of Arthur's Seat, in Edinburgh) for being in between a mountain and a hill, or, what amounts to the same, we can blame the predicates «mountain» and «hill» for lacking sharp boundaries and posing many cases of dubious application. And if we are indeed concerned with the semantical question, then the relevant vagueness will be that concerning which substantive terms (nouns) are vague. That is to say, since we agree that vagueness applies to a large extent to predicate terms, let us inspect whether it applies to subject terms to the same extent. Now it seems we are finally in touch with the semantical question. Let's formulate a criterion in terms of substantive nouns, which are genuinely linguistic entities, rather than in terms of objects:

A) A substantive term is vague iff there are possible borderline or dubious cases of application.

Again, because it is modalized this criterion is too liberal: for any substantive term we can conceive of, there is a possible world in which there are objects of which it is doubtful whether the term applies to them or not. For example, «tiger» would be vague, because there is some possible world in which there are mammals similar to our tigers, but also similar to our leopards (let's dub them «tigepards»), so that it is dubious whether they are a subspecies of tigers, or of leopards, or whether they form an altogether different species (to make things yet more dubious and complicated, let's suppose that in general tigepards are able to reproduce only among themselves, but that occasionally they can reproduce with both tigers and leopards alike, some of their offspring being fertile, some not). Well, with this criterion surely all or almost all substantive terms are vague, because we can conceive of dubious cases of application. But is this an adequate definition of the vagueness of a substantive term? I don't think so. Although «vague» is itself vague, we do have a concept of vagueness, and when we have a concept and its corresponding term there usually are cases in which it applies and cases in which it does not apply, or at least cases in which the differences in application are very remarkable. That is, if some terms are radically vague, some others must be precise, or at least vague to a much smaller degree. And it seems totally unfair to regard the term «tiger» as vague simply because there could have been animals of which it would be difficult to decide whether they were tigers or not.

If someone is not yet convinced by the tiger example, we can provide another. Let's take a gold ingot. Is the term «gold» vague? Of course, for the ordinary speaker of English there will be cases in which he is in doubt whether or not to apply the term to some object, because he is liable to mistake some other metal for gold, provided they lookFoot note 4_2 alike. Nonetheless, there is a well established scientific criterion to determine whether a given ingot (or a single atom, if you like) is or is not gold. And in a case like this it is even controversial whether there are possible worlds at which there is gold with a different number of protons and electrons from our gold. So, I will demodalize the criterion once more:

B) A substantive term is vague iff there are actual cases of borderline or doubtful application.

With this criterion it seems clear that not all terms are vague: «ballpen», «gold», «tiger», «quartz», «star» etc. are all cases of non-dubious application. True, for the lay person it can be very doubtful whether a watch is or is not made of gold (particularly if its origin is dubious), but the physicist can answer the question without trouble. By the same token, a zoologist can tell whether a certain mammal is a tiger, a geologist whether a piece of mineral is quartz, and the astronomer whether a point of light in the night sky is a star (rather than a planet, a comet or a distant galaxy).

What I am trying to say is that most sortal terms have well defined criteria of application, so that cases troublesome for ordinary people can be definitively resolved by the expert. Nevertheless, this, unfortunately for the philosopher -and fortunately for ordinary language- does not happen with every sortal. «Mountain» and «hill» are typically vague sortal terms, as are «city» and «town». Certainly, the geographer could stipulate the border between hill and mountain as being, for instance, 500 m of height (either above sea level, or more plausibly above the level of the surrounding ground), and the political geographer could stipulate the border between town and city as 100,000 inhabitants. Then we would have absolutely precise terms and concepts, although arbitrarily precise. Arbitrariness need not be a shortcoming: we are the authors of houses, villages, towns and cities, so we are entitled to define (i.e. to delimit with as much precision as desired) our concepts in that field. The real problem is that in fact we don't do that (or don't always)Foot note 4_3.

Things are different when sortals correspond to atoms, molecules, minerals or biological species. There, nature has established definitive differences which prevent the choice of sortals from being arbitrary. If I remove a single hydrogen atom from a benzene molecule, it is no longer benzene at all. «Benzene» is, then, one among many substantive terms with no borderline cases of application, and which, as a result, is not to be counted among the vague terms. I would suggest that when a sortal admits of borderline cases of application, that is, when we would have to stipulate arbitrarily a sharp cut-off as the sortal's limit -as between a town and a city, or between a hill and a mountain- we haven't got a true sortal, but instead a quasi-sortal. This seems to me more proper than saying that there really are vague sortals. Since «sortal» is a specifically philosophical term, I would use «substantive» for «vague sortals» and non-vague ones alike, and keep «sortal» just for non-vague sortals.

A barrier should not be set up between natural and artificial sortals. Mountains are natural beings, but the sortal «mountain» gives rise to problems of application in borderline cases that the artificial sortal «table» never (or almost never) gives rise to. Natural sortals are less problematic only when their referents have a clear individuation in the scala naturae, either by being entities of little complexity (such as an atom, a molecule, perhaps a prion and certain viruses) or by being highly differentiated. For example, however rare, silly, angry or big a mammal may be, to predicate «is a cat» of it is absolutely true (or false). And here there seems to be no place for things like the infinitely perfect cat (the cat most similar to the ideal cat), nor, conversely, for the least cattish entity possible.

Now, the opponent can reply as follows: with the last criterion we are confronted with a dilemma: either it is inapplicable in many cases, or its application renders many more cases vague than it seems at first sight. For instance, let's go back to tigers. Probably there are now no living species similar enough to tigers in the relevant aspects and traits, so the use of the term «tiger» at present gives rise to no problems. But it is very likely that in the history of the evolutionary process there have been intermediate species, nowadays extinct, of which it was in fact doubtful whether they were really tigers or not. In that case, although for ordinary people «tiger» is not a vague term, it is so for the zoologist -the expert, and consequently the one who is entitled to decide in the end whether the application of a term is correct or not- and for the paleontologistFoot note 4_4. Moreover, let's suppose that we find remains of a species morphologically and anatomically very similar to tigers, which seem to be their close ancestors in the evolutionary line. Let's dub them «?tigers». Now, suppose that the sound criterion of individuation for biological species is this: two exemplars (obviously of different sexes) belong to the same species iff they are capable of reproducing with each other, and so are their descendants. In that case, since ?tigers are extinct, we presumably are not able to know whether they were capable of reproducing with tigers or not. But ?tigers did exist; so, applying our criterion, it is doubtful (we cannot know) whether the term «tiger» is vague. Now, there is nothing special about the case of the tigers. What happens in the case of so evolved and differentiated a species as tigers happens in all other cases too, because evolution has been gradual and highly branched, and is full of nowadays extinct species (compare the problem of fixing which of the hominids was the first human being).

Well, we could reply: possibly paleontology can establish some cases of now extinct species as doubtful concerning their assignment to a species already identified and classified, or their forming a new species. Maybe even in zoology there are some living exemplars which are only doubtfully classified into a certain species. But it is not so in most cases. Cases of dubious classification are the exception, not the rule. And, since we have decided to regard as vague only those terms whose application is in fact sometimes doubtful, even if there are vague terms for biological species, they are very few.

But the critic may reply as follows. Well, paleontology is still progressing, and surely there are many fossil remains of now extinct species yet to be found, or maybe unfortunately lost for ever. At any rate, these species have existed, so species terms are in all probability all vague (or almost all). Or, at best, we can never be assured that an apparently precise term won't turn out to be vague in the end. And an argument along the same lines can be offered for artefacts. From the fact that we have never heard of any object in between a chair and an armchair, you cannot infer that never and nowhere has any carpenter made something intermediate. To put the argument in general terms: for any well defined artefact kind A, it is possible that some civilization or other has at some time made some artefact token a′ sufficiently similar to A tokens for it to be doubtful whether a′ is to be counted as an A or not. Then we can never be assured that apparently non-vague terms like «ballpen», «chair» or «spoon» are not vague after all.

I think it is proper to reply to that line of argument in this way: a term must be counted as innocent of vagueness until proven guilty. This means that while you have not found a single dubious actual case of application of the term, the fact that in the distant past there could have existed doubtful cases of application is as irrelevant as the fact that there could be cases in the future (from the fact, say, that in the future someone will make something intermediate between a car and a motorbike, it does not follow that those terms are vague now).

We might even claim that if the remains of ?tigers are discovered, this does not immediately turn the term «tiger» into a vague one, if we accept as a criterion for the existence of a species the fact that there be living exemplars. Nevertheless, I concede this may sound paradoxical. For, as we have admitted above, it is the expert, the scientist, who is allowed to decide whether the application of a term to a given object (gold, water, quartz etc.) is right or wrong, and as a result it is up to him to define with precision what would otherwise be vague objects. But in this case it is the other way round, because a term apparently not vague at all is liable to turn out to be vague precisely for the zoologist. And, as we will see later, if in the former case there are good grounds to consider the scientist the one apt to resolve doubtful cases (and also relevant grounds for demanding his assistance), these same grounds should apply here. For the paleontologist the problem is that in the past there has been, or might have been, a dubious species. For the philosopher the problem (with the criterion in use) is that at present there are remains of which it is doubtful whether they belong to a determinate species.

But I think we could make use again of the forensic clause «innocent until proven guilty». Thus if a paleontologist finds remains of ?tigers and is unable to determine from these remains alone whether they belong to the species of tigers, whether they belong to an altogether different species, or whether they belong to a species in between, say, tigers and leopards (and so to a doubtful or vague species), then the term «tiger» must continue to be counted among the non-vague terms. So, only when the paleontologist finds evidence that ?tigers reproduced partially with tigers (for instance, that tigers generally reproduced with ?tigers, and so did their immediate descendants, but that in the third generation reproduction was difficult, and the next generation was entirely sterile) is he entitled to take the term «tiger» as vague. Since this is usually very difficult for paleontology to prove, I take it that the vast majority of species terms are non-vague. A similar argument runs for artefacts. For example, ten years ago or so some vehicles were made that are in between being a car and a van. The consequence is that the terms «car» and «van» were vague for a short period: just the period necessary to find a new term for a concept we had in advance (without the concept, without the idea, the product could not have been designed and made), viz. «carvan». Perhaps next year a vehicle will be released which is in between a carvan and a car. Then -and only then- will we again have problems of vagueness with the term «car», problems which, presumably, will be solved in a similar way.

Now things are radically different when we focus on the remaining predicates. For there are lots of predicate terms that give rise -even if only occasionally- to problems of borderline cases. And others that give rise to this same problem very frequently, in particular certain adjectives (young, tall, short, nice) and adverbs (much, little, enough etc.).

The second part of this article is devoted to criticising the degree-of-truth approach to coping with vagueness in our languageFoot note 4_5. I will put aside the many logical difficulties -pretty well brought out by others- that degree of truth theories and fuzzy logic give rise to, and concentrate upon philosophical problems, both semantical and ontological.

Many discussions about vague terms take for granted that there is a well established series of standard examples in which the kind of vagueness is more or less equivalent. Among the classical examples are those of the heap, of baldness and of the colour patch. All of them have something in common: vagueness is one-dimensional. There is a sort of line, and it is unclear when the term begins to apply, and sometimes it is also unclear when the term ceases to apply. In other, more complex cases vagueness is multi-dimensional. Take, for instance, «beautiful». For something to be or not to be beautiful many parameters and their relations to each other must be taken into account: shape, size, colour, appearance etc. Some of them may be extrinsic to the object. So a modern building of steel and glass can be aesthetically of great value, but horrible in the middle of an old town. Words like «nice», «clever», «able» etc. are all multidimensionally vague. So, in order to avoid difficulties, let's limit ourselves to the simplest type: one-dimensional vagueness (with the hope that, if there is any solution at all to the problem of vagueness in one dimension, the same strategy can be used for every dimension).

A different case is the one in which we find vague terms, but in which the kind of vagueness is totally context-dependent. Take «enough». For a clerk, a salary of $1,500 per month can be acceptable, «enough»; for someone unemployed it will usually be much more than «enough»; for a top football player like Ronaldo it will be totally unacceptable. If we put together terms like «beautiful» and «enough», things become more and more complicated.

If we are interested just in how to cope with semantically vague terms, and if -as Frege and Dummett both think- vagueness is semantically incoherent, a single case of vagueness is enough to create the whole problem. But if we are also interested in how far vagueness is entrenched in our language and our thinking, then both the question of the number of vague terms and the question of their degree of vagueness become relevant. In that regard, I would like to remark that much apparent vagueness is only contextual dependence. Let's take an example with «far». The Universidad Autonoma of Madrid is 15 km away from Madrid (from the centre at Puerta del Sol). Does the property «being 15 km away from» fall under the predicate «far»? This is almost entirely context-dependent. If we are talking about a car race running from Paris to Madrid, being 15 km away from Madrid is, without any doubt, not to be far. If we are talking about a plane coming from San Francisco to Madrid, to be 15 km away from Madrid is not to be far. If we are talking about a spacecraft coming from Neptune, to be 15 km away from its destination is not to be far. If, conversely, we are talking about whether it is fair or reasonable for a student living in Puerta del Sol to walk to the University and back every day, surely it is far indeed. If we are comparing the Universidad Autonoma with the Universidad Complutense (which is in the city of Madrid and very close to the centre), then, without any doubt, the Universidad Autonoma, by virtue of its being 15 km away from Madrid, is again far, and so on. Now, imagine we are thinking of cycling this distance twice a day; here it seems we have a real problem of vagueness. But, once more, it is partially context-dependent. For a young 20-year-old, 30 km by bicycle per day is a perfectly feasible distance; so for him the University is not far. For the emeritus professor who is over 70, surely 30 km by bicycle is too much, so for him to cycle to the University is definitely far. But what about the Reader who is in his forties? Cases like this are the really vague ones (and again, they could be made more precise by specifying the context: for a sportsman, 30 km a day by bicycle is no problem; if he never does sport, the distance will be insurmountable. Only if he does sport from time to time do we have -there and only there- a genuinely doubtful case). In summary, «far» is a vague term because there are some cases of dubious application, but once the context is entirely fixed, the vague cases are a minority, fewer than usually thought. The fact that in ordinary speech acts the context is usually well defined is what makes serious trouble with vagueness unusual, and lets communication normally flow free from obstacles.

Nevertheless, whether a minority or a majority, cases of vagueness do occur even in a maximally precisified context, and pose a real problem for the philosopher. It is here that some philosophers contend that, to be able to cope with vague terms, we must reject the principle of bivalence and admit that not every statement is simply true or falseFoot note 4_6. There are intrinsically doubtful statements, that is, statements with no determinate truth value. Whether we interpret «indeterminate» as a third truth value or as an absence of truth value, there is a general consensus that this move is useless, for now we are faced with finding two sharp cuts for the application of a vague term, instead of just one as before. Now there has to be, on the one hand, a sharp cut-off between the cases in which someone, say, is «definitely tall» and the cases in which he or she is «dubiously tall»; and, on the other hand, a sharp cut-off between the cases of being «dubiously tall» and the cases of being «definitely not tall». But, as is well known, higher order vagueness makes it just as difficult to draw these two new boundaries as it was to draw the former one, precisely because it is dubious when someone is «dubiously tall»Foot note 4_7. True, some philosophers have raised doubts about the existence of higher order vaguenessFoot note 4_8. But I take it to be rather obvious that it does exist, and also take it that its existence has been accepted by most philosophers involved in this topic.

Once higher order vagueness is allowed, we need more truth values in order to be able to come to terms with it. Then there will be cases in which someone is «dubiously definitely tall», «dubiously dubiously tall» etc. Now the question is: how many values are necessary for the full range to be covered? It seems uncontroversial that any finite number of truth values is both arbitrary and insufficient. Arbitrary, because there is no compelling reason to distribute the degrees of truth or correctness for the predication of terms like «tall», «young», «happy» etc. over one particular number rather than another. Insufficient, because a dubious case can always arise which is intermediate between two successive degrees. Furthermore, a finite grading would involve a discontinuous application of a term where use rather sets a continuous line. It seems, then, we are compelled to admit infinitely many degrees of truth in the application of a vague predicate, where 0 will be the absolute non-application of the predicate and 1 its absolute application, with the infinite interval of real numbers in between. In set theory this corresponds to a logic that represents concepts as points in logical space, so that to be just on the point is to be a member of the set at degree 1 (or to possess the property at degree 1), and as we move away from the point the degree of set membership decreases gradually until it is finally 0. My aim here is to contend that this kind of answer is both technically deficient and philosophically misguidedFoot note 4_9.
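To fix ideas, here is a minimal sketch -not part of any particular degree theory, and with a linear fall-off and cut-off radius chosen purely for illustration- of what such an infinite-valued assignment looks like: a function from positions in a one-dimensional logical space to the real interval [0, 1], with degree 1 exactly on the concept's point and degrees decreasing with distance from it.

```python
def membership(x: float, concept_point: float, radius: float) -> float:
    """Degree (between 0 and 1) to which x falls under the concept located
    at `concept_point`: 1 on the point itself, decreasing with distance,
    reaching 0 at distance `radius` and staying 0 beyond it.
    The linear decrease and the cut-off radius are illustrative assumptions."""
    return max(0.0, 1.0 - abs(x - concept_point) / radius)
```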

I begin by classifying one-dimensional and non-context-dependent vague terms into three categories: a) those with two blurred boundaries, b) those with a blurred boundary at one side and a sharp boundary at the other, and c) those with a blurred boundary at one side and no boundary at the other. My claim will be that the gradualist approach works with the first category, but is unnecessary there, and that it does not work with the other two. The main problem is this: if we are compelled to assign values between 0 and 1, either we are unable to assign the 1 coherently, or we are unable to assign the 0 coherently, or both.

We start with cases of category (a). Let's take the colour predicate «green». It is clear that grass, pine leaves, peas and spinach are all unequivocally green. But bananas and lemons are frequently rather yellow. Conversely, sea water is normally blue, although on cloudy days it can be rather green. Accordingly, «green» is a predicate that applies to a waveband of the spectrum between blue and yellow. Let's suppose, for the sake of argument, that there is an exact point -even if unknowable- at which blue becomes green, and another exact point at which green ceases to be green and becomes yellow. Let's call these points a and b respectively. Now, how are we to assign the values 1 and 0? Obviously we cannot decide that a=1 and b=0, because both are equally minimally green. Which point possesses greenness at degree 1? Surely the one that is just in the middle between a and b. In that case we have two points, a and b, for zero (0), and one point for one (1), the point midway between them. This does not seem to be a big difficulty. We could simply assign values differently, and consider, for instance, 0 to be the perfect possession of a property, and -1 and +1 the perfect non-possession of that property. But it is more in accordance with standard usage to assign 1 to the maximal degree of property possession, and 0 to the minimal, even if there are two 0 cases in opposite directions.

The first serious problem is this: is this procedure not a bit question-begging? A vague concept is one which possesses borderline cases because it has blurred boundaries, and we have supposed there to be an exact point at which blue becomes green and another exact point at which green becomes yellow. And this is what is questionable. But the gradualist, be he infinitist or finitist, needs to suppose that there is an exact point at which the predicate begins to apply, and an exact point at which it no longer applies, only the range in between being gradualFoot note 4_10. Note that this supposition -also accepted by the epistemic theory- that there is an exact cut-off point (in this case two) where the application of the term begins or ends is, to my mind, sound where colours are concerned.

Within the spectrum of wavelengths there is an exact point at which light becomes visible to the human eye (just as there is a different exact point at which it becomes visible to dragonflies), and there is also another exact point at which it becomes invisible to the human eye. Along this range we are able to set up divisions based upon qualitative differences (qualia) clearly perceptible to all normal human beings. Ordinary use in many languages has assigned seven colour terms as basic -those of the rainbow- (putting aside black and white). Well, we can then, following this linguistic use, divide the spectrum into seven equally long wavebands and ask the physicist to establish at which wavelength each waveband starts. Let's take the green waveband. Its centre point corresponds to maximum greenness, or greenness to degree 1, its bounds to greenness to degree 0. In between there are as many degrees as you like. In fact, I have chosen green instead of the usual red because green is not a primary colour, but the mix of blue and yellow. Then, surely, a mix of 50% perfect blue and 50% perfect yellow is green to degree 1, and possibly pure blue and pure yellow are green to degree 0.
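A worked sketch of the proposal just described may help. The figures are illustrative assumptions, not the author's: a visible range of roughly 400-700 nm, seven equal wavebands in rainbow order, and the fourth band (index 3) counted as green; the physicist consulted in the text would of course supply the real boundaries.

```python
VISIBLE_MIN_NM, VISIBLE_MAX_NM = 400.0, 700.0        # assumed rough limits of human vision
BAND_WIDTH = (VISIBLE_MAX_NM - VISIBLE_MIN_NM) / 7   # seven equally long wavebands

def band_bounds(index: int) -> tuple[float, float]:
    """Lower and upper wavelength of band 0..6, ordered from violet to red."""
    lower = VISIBLE_MIN_NM + index * BAND_WIDTH
    return lower, lower + BAND_WIDTH

def greenness(wavelength_nm: float, green_band: int = 3) -> float:
    """Degree of greenness: 1 at the centre of the green band, falling
    linearly to 0 at the band's bounds and staying 0 outside it."""
    lower, upper = band_bounds(green_band)
    centre = (lower + upper) / 2
    return max(0.0, 1.0 - abs(wavelength_nm - centre) / (BAND_WIDTH / 2))

greenness(550.0)   # 1.0: the centre of the assumed green band
greenness(571.4)   # about 0: the bound where green gives way to yellow
```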

Now, the critic can respond that we have artificially defined colours by using scientific patternsFoot note 4_11, when it is use which bestows meaning upon terms. Quite naturally, different linguistic communities, with different ways of life, will have different uses and, consequently, different meanings for different terms (that is, the semantic fields of their terms will not be equivalent). For instance, Eskimos are usually able to distinguish up to 10 shades of white, and have names for each one. Let's accept this criticism. But we are talking about what «green» means within our linguistic community -in a different language our «green» may be polysemous, or not exist at all. In our language there is only one term for green things, and all shades are named by qualifying the basic term: «bottle green», «grass green» etc. Only the bounds of application are dubious. Now: who should establish these bounds? Wright has insisted that it is the linguistic community that is entitled to do soFoot note 4_12. This is a matter of statistics. Choose a group of native English-speaking people (better from the same country?), and show them a series of colour patch shades going gradually from green to yellow. When the consensus is broken -that is, as soon as one of the people dissents from the proposition «this patch is green»- then that shade of colour is no longer definitely green, or clearly green. When they regain consensus, all of them referring for the first time to a colour patch as «yellow», then the shade is definitely yellowFoot note 4_13.

This procedure has a number of well known difficulties. The least serious is that, if this is supposed to be a determinate number of people chosen on a certain occasion, the result will be both random and arbitrary: even the same people are liable to change their minds at a different time. But, if we refer to an «ideal linguistic community», then it is impossible to know the meaning of «green», because we have no way of finding out the verdict of such a community. It is even controversial whether the concept of an «ideal linguistic community» is a coherent one, if it has to include contradictory decisions of the same person at different times.

The more serious difficulty is that in many borderline cases the speaker would not know what to say. It might take him some minutes to decide; maybe he retracts shortly after having said something definite; maybe he just whispers his response without saying anything clearly aloud, etc.Foot note 4_14. And should the answer of someone who quickly and resolutely says «green» be scored equally with the answer of one who, after thinking carefully about it, says rather shyly and tentatively «green»? The real problem is that the actual actions of flesh and blood individuals, including their speech acts, have such a huge number of parameters, many of them relevant to the present case, that no statistic, however detailed and complex, is able to mirror them. In summary, there will be no way of assigning precise values to many boxes in the statistics. The statistic itself would need, instead of three boxes (green, not green, yellow), either an innumerable amount of boxes or fuzzily defined boxes («rather a bit more green than nearly yellow» and so on). Either alternative renders the statistic unviable.

Is it not then more sensible to let the physicist be the referee in this kind of situation, and let him decide where the exact boundary between green and yellow lies? But in that case the term «green» is no longer vague, because there are no longer non-decidable borderline cases. Thus the cases are borderline only as concerns everyday use, because we lack adequate perceptual discrimination, but with a spectrograph we are always able to tell whether some shade is definitely green or yellow, with as much accuracy as desired. If this is correct, colour terms are not intrinsically vague; it is only usage that makes them vague. Now, let's suppose that meaning supervenes on use. In spite of this, in cases of doubtful use it is still the scientist who is responsible for fixing the correct use. Think, for instance, of the case of gold. Whether a given piece of metal is or is not gold is decided not by the linguistic community, but by the scientific community (the chemists, this time). Why can't the same hold in the case of colours? Normally because it is not very important whether something is definitely green or yellow, while it is extraordinarily important whether a collection of ingots is or is not gold. If in a certain case the verdict concerning a colour were of the utmost importance (for instance, to decide whether something is a certain gem or just imitation jewelry) we would require the assistance of the scientist (the gemologist, this time).

Now, it might be reasonable to accept that perfect green consists of a perfect mix of blue and yellow. But where along the spectrum does green end? Surely a uniform patch shade of 1% blue and 99% yellow continues to be yellow, and so shall we perceive it. Very likely the same will happen if the mix is 2% and 98%, and so on. Does there not continue to be, after all, a blurred boundary at which something ceases to be green? If we have decided to divide the spectrum into seven bands, then the band between pure blue and pure yellow should be shared out as follows: 25% on the left for blue, 25% on the right for yellow and the remaining 50% in the middle for green (this implies that colours at the ends of the spectrum will have bands half as wide as the rest, and that their value 1 will coincide with the point at which the spectrum becomes visible; so we would have five equally wide bands, and two half-bands at the spectrum's borders).

If this analysis is correct, «green» does not have intrinsically doubtful cases of application; they are only epistemically doubtful, given the perceptual abilities of average people. But -and this is the big question- if it is after all not a vague term, what is the purpose of a gradualist analysis? The gradualist will contend that, even if it is well defined when a shade is green and when it is not, further questions remain: how green is the green patch? Up to what point is it green? And it is here that the degree of truth approach has a role to play, establishing degrees of greenness between 1 and 0. But this is, to say the least, a matter for discussion. Look upon a varied garden in winter. You have there lots of plants, with all shades and intensities of green. Does it make sense to ask which of the leaves are the greenest? It seems not; the proper answer seems to be something like: «there are many different shades, but all of them fully green, just variously green». If this is right, we have no reason to accept the gradualist approach, because all shades within the waveband of the spectrum we have agreed to call «green» are fully green, differing only in shade.

Now it can be objected: could the trouble with vagueness and fuzzy boundaries not arise again at the level of shades? Of course it could, but it does not necessarily arise. Suppose we are interested in making a pencil case with pencils of many colours. For green we want, say, 16 pencils. Well, we divide the green band into 16 equally wide stripes, we proceed to match the central shade point of every stripe with the colour shade of one pencil, and then -if you wish- bestow a name upon it (presumably a name related to some object in nature normally of that shade of green).

Of course, this example is intentionally simplified. In reality, every determinate colour shade also admits of differences of intensity: the same colour shade can be lighter or darker, depending on its mixture with white or black (and also more or less bright or pale etc.). A lemon is between green and yellow, but a lettuce (like most green vegetables) is between green and white. This shows that if colour terms were vague they would in fact be multidimensionally vague: a blue patch can be not only very close to green or to violet, it can also be very close to white (like the sky on a sunny summer day) or to black (like dark navy blue). So in our case, if we wish to add to the 16 shades of green, for instance, three intensities of each, we will need 48 pencils, each one with the name «green», the surname «grass», «pine» etc. and the epithet «light», «normal» or «dark». For if there is a way of fixing the cut-off point between green and blue and between green and yellow, a similar procedure could be used to specify the sharp cut-off between white and green and between green and black.

In conclusion, in spite of all contrary appearances, colour terms are not intrinsically vague. Or, to put it differently, usage makes some of their ordinary applications troublesome, but there are no objects «vaguely green». That conceded, two options remain open for us. The first is to regard as maximally green (green at degree 1) only colour shades that are a perfect mixture of pure blue and pure yellow (and perhaps also without any admixture of white or black), and to regard all shades that progressively recede from this pattern as gradually decreasing in their degree of greenness, until they are green at degree 0 (and to place this 0 point either at blue at degree 1 and yellow at degree 1 respectively, or rather at some other point, for instance at a mixture of 75% green (of degree 1) and 25% of either blue or yellow (also of degree 1)). It seems to me that the actual use of ordinary language does not justify this procedure at all. For example, according to the gradualist approach it makes perfect sense to ask: is there something which is perfectly blue? And presumably it will be very difficult to find such a thing (but think of a woman who, coming into a dress shop, asked: «do you have a perfectly blue evening dress?»; surely the proper answer would be: «what do you mean by «perfectly blue»?»). But in a case like this, logical grounds justifying the gradualist approach are also lacking. At any rate, it is not necessary to tackle the vagueness of those predicates, once it is acknowledged that they are not really vague.

Now let's turn to cases of the second type, (b): that in which there is a blurred boundary at one side and a sharp one at the other. Standard examples are the bald man and the heap. With 0 hairs on his head Mark is undoubtedly bald, 50,000 unequally distributed hairs could be a dubious case, and to have 200,000 uniformly distributed hairs is not to be bald at all. So, the more hairs on his head, and the better they are distributed, the less bald he is. For the degree theorist this time we have a clear assignment of the value 1 for baldness: 0 hairs on the head. In such a case Mark is 100% bald. But where does baldness at degree 0 begin? Here we are supposed to have the fuzzy boundary. Maybe something around 100,000 hairs or so. But this answer is forbidden to the gradualist. He is committed to the claim: the more hairs, the less baldness. And that is the problem, for what is the maximum number of hairs someone is able to have? We could say: as many as there is room for on the head. But on what head? Imagine Mark has a head twice the size of Jim's. Mark may have 400,000 hairs on his head, Jim 100,000, both uniformly distributed. Is the degree theorist not compelled to say that Mark is less bald than Jim? But, quite plainly, neither of them is bald at all. So maybe degree of baldness is dependent upon one's head size (maybe also on the thickness of one's hairs?, because the thinner the hairs, the more of them can stand on a head). So we could conclude: the head size sets the limit for baldness 0; when on a certain head there is no room for even one hair more -independently of the head size- the one who owns this head is bald at degree 0. This analysis again has two counterintuitive results. One is that, since everybody is continuously replacing his or her hair (at about 50 to 100 hairs a day), no actual head is altogether full of hair, and so everybody is at least a bit bald. The other is that, conversely, someone who has no hair on the top of his head, but has many hairs on the sides, is to be counted as only half bald. This accords neither with the use of ordinary language nor with ordinary knowledge. According to both, about 10 to 20% of male adults are bald, another 5 to 10% are becoming bald, and the rest are not bald at all. And among the ones who are definitely bald are those with no hair on top and thousands of hairs on the back of their heads.
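The counterintuitive results just mentioned can be made vivid with a toy calculation. The head capacities below are invented for the sake of the example; the only point is the shape of the head-relative assignment.

```python
def baldness_degree(hairs: int, head_capacity: int) -> float:
    """Head-size-relative gradualist measure: 1 with no hair at all, 0 only
    when there is no room on that particular head for even one hair more."""
    return 1.0 - hairs / head_capacity

baldness_degree(400_000, 410_000)   # Mark, huge head: ~0.02, counted as slightly bald
baldness_degree(100_000, 103_000)   # Jim, ordinary head: ~0.03, also slightly bald
baldness_degree(50_000, 103_000)    # bare on top, hairy at the sides: ~0.5, merely «half bald»
```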

The example of the heap is yet worse. The problem has always been posed as one of identifying the point at which a group of grains put together makes up a heap. Zero grains are supposed to be a heap at degree 0, and from 1 grain on, the more you add the closer the collection comes to being a heap. This is controversial. In my view, the right answer to how many grains (at a minimum) make up a heap is four, because, as a matter of fact, it is only with four piled-up items that a stable three-dimensional shape can be formedFoot note 4_15. Notice that if this answer seems rather implausible, this is because we are thinking of a certain context -a heap of sand grains, in building a house for example- where it seems bizarre to say that four grains make up a heap. But think of a heap of books; here, four can easily form a heap. Moreover, perhaps in this case three is the minimal number (if the criterion for being a heap is being able to pile up items in a stable manner, then two books could be a heap, but I think our linguistic intuitions tell us that a heap, whatever its components, has to have more than two of them).

But the truly paradoxical character of the heap case lies at the other side. 100,000 grains of sand are a heap, but 10^14 are not a heap; they are a hill or maybe a mountain -it depends also on their compactness. The gradualist is able to assign sensibly the property of being a heap at degree 0 to zero grains, but there is no way to assign the degree 1Foot note 4_16. Of course, he cannot admit four grains to be a heap at degree 1, because in that case 2 grains would be half a heap. The perfect heap should presumably lie somewhere between 10,000 and a million grains, perhaps. But here the gradualist is as desperate as anyone. So, in order to have a definite answer to offer, he is obliged to say that the greater the number of grains, the more something is a heap, so that to be a heap at degree 1 the heap has to contain an infinite number of grains. But this is absurd. And if, instead of infinity, we choose a very high but physically feasible (on the Earth) number of grains, we get Everest, which cannot properly be called a heap.

Maybe it can be argued that cases like this are better served by a finitist gradualist approach. After all, the gradualist approach can boast of being able to solve the sorites paradox concerning heaps. And so it does when, as our first premise, we assume that a certain determinate number of grains does in fact form a heap:

1) 10,000 grains (piled up together) make up a heap

2) If n grains make up a heap, n-1 grains also make up a heap

..........

Then, zero grains make up a heap.

Applying a finitist gradualist approach here, we can say meaningfully that «17 grains make up a heap» has a degree of truth of 17/10,000, or -if instead of gradualizing truth we prefer to gradualize the vagueness of the concept- we can say that 9,983 grains make up a heap to degree 9,983/10,000 (that is, 0.9983). But the root of the trouble lies in the fact that the first premise is totally arbitrary, just as arbitrary as taking the number 10,000 as the paradigm of what it is to be a heap: we are entitled to regard this premise neither as the exact point at which a number of grains is already definitely a heap, nor as the exact point at which a certain entity is just midway between zero grains and a hill. The problem for any gradualist, finitist or infinitist, is that he still needs an exact number of grains that turns a lot of grains into a heap at degree 1 (the finitist may choose 10,000 and the infinitist the infinite number: both are troublesome, although for different reasons). So we are not a single step ahead.
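As a sketch, the finitist assignment just used amounts to the following, where the 10,000-grain threshold is exactly the arbitrary stipulation under discussion:

```python
def heap_degree(grains: int, threshold: int = 10_000) -> float:
    """Finitist gradualist toy assignment: «n grains make up a heap» is true
    to degree n/threshold, capped at 1. The threshold is the arbitrary
    first premise of the sorites argument above."""
    return min(1.0, grains / threshold)

heap_degree(17)      # 0.0017
heap_degree(9_983)   # 0.9983
heap_degree(10_000)  # 1.0: a heap at degree 1, by stipulation only
```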

Now we are going to take an example of the third case, (c) (recall: a fuzzy boundary at one side, no limit at the other). Let our example be «tall». In this case, as in many others, much of the vagueness is context-dependent. Among Pygmies, for instance, a man 1.50 m tall is indeed tall, but among the Masai he is not tall at all. If we are talking about basketball, a player of 2.04 m is normal, but among the population of his city he surely is tall. As already conceded, not every case of vagueness is context-dependent: when talking about basketball it is unclear whether a player of 2.08 m is tall or not: possibly he can play as a 4, as a 5 or as a 3, according to circumstances.

Now, suppose the context is maximally wide: the world population in toto. We are talking about people in general, to determine when they are tall. Well, the gradualist infinitist analysis seems to commit you to saying that «Peter is short» is perfectly true only if Peter measures 0 cm. And, conversely, «John is tall» is absolutely true only if John measures infinitely many metres. Both cases alike are impossible, so to predicate truth is impossible: that is, no such predication applied to finite beings is absolutely true.

Nevertheless, if we take a predicate like «tall», even if we are unable to determine exactly when a certain person is tall, to say «John is tall» when John is 2 metres tall seems absolutely true, and to say «Peter is tall» when Peter is 1.50 metres tall seems absolutely false. That is, from the fact that we are unable to precisify the application of a predicate to a subject in doubtful cases (cases of fuzzy or blurred boundaries), and even allowing the fact (if it is a fact) that every concept or predicate has some cases of dubious applicability, it does not follow that there do not exist any precise cases, in which the result of applying that same predicate to a subject is a statement uncontroversially true or uncontroversially false simpliciter.

The infinitist approach is more attractive only when we are concerned with predications about indeterminate subjects. Thus, to say of something that it is «big» seems to be «more true» the bigger the thing (of a molecule more than of an atom, of a cluster of galaxies more than of a single galaxy etc.), where big «in all truth» will be solely that which is absolutely infinite in size, and big at degree zero a (geometrical) point in space. The same goes for predicates like «heavy», «large» etc. But when the predication is about a particular kind of subject, like for instance a person, there is no infinite margin of application, in spite of it always being possible for someone to be a bit taller than whoever is at present the tallest. How, then, are we to fix the point at which the tallness of a human person is 1? Let's survey quickly three possibilities.

A) That «tall» at degree 1 means «what the tallest human being now alive in fact measures».

Let's suppose it is Robert Robertson, and that he is 2.70 m tall. This move has very implausible consequences.

1) That people 2 m tall are not very tall, because they are far away from Robert Robertson's tallness. Naturally, how tall they are depends on where we fix tallness at degree 0.

1.1) If we locate tallness at degree 0 in people who are 0 cm tall, then people who are 2 m tall are rather tall indeed, because they are closer to Robert's 2.70 m than to 0 cm. But this has the dramatically bizarre consequence that people about 1.30 m tall have tallness at degree 0.5 (they are half tall) just for being at the middle point between Robert's height and 0 cm, as the sketch below makes explicit. And this is not to mention the yet weirder consequence that we should have to qualify people of 15 or 20 cm as extraordinarily little tall (perhaps: as describable as tall, but only a very tiny bit tall).

1.2) But if we wish to fix tallness at degree 0 at any other point, then we get two problems instead of one: a) how to fix that point in a non-arbitrary way (this has no solution); b) suppose the point is fixed at 1.90 m. Then people who are 2 m tall are not very tall. Even Kareem Abdul-Jabbar is not perfectly or clearly tall, because he is only 2.17 m tall, and, as a result, he is closer to 1.90 m (the point at which anyone begins to be tall) than to 2.70 m, Robert Robertson's height -where tallness at degree 1 lies.

2) This option has an even more counterintuitive consequence. Let's suppose that the second tallest human in the world is Thomas Thomason, who is 2.30 m tall, that is, just in the middle between 1.90 and 2.70. And suppose also that Robert Robertson hits his head getting out of a lift, with fatal consequences. Then Thomas, without any effort and without being aware of anything, has passed from being half tall to being perfectly tall; that is, the predicate «tall» no longer applies to him at degree 0.5, but at degree 1. But all this seems ridiculous. By any standard, Thomas is a person determinately, definitely and clearly tall, quite independently of the continuous fluctuations in the rest of the world population and their tallness, basketball players included.
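Here is the linear reading of option A made explicit. The heights are those used in the text; the linear interpolation between the 0 and 1 points is an assumption about how the gradualist would fill in the intermediate degrees.

```python
def tallness_degree(height_m: float, zero_point_m: float, one_point_m: float) -> float:
    """Assign tallness 0 at `zero_point_m` and 1 at `one_point_m` (Robert
    Robertson's 2.70 m), interpolating linearly in between and clipping outside."""
    degree = (height_m - zero_point_m) / (one_point_m - zero_point_m)
    return max(0.0, min(1.0, degree))

tallness_degree(1.35, 0.0, 2.70)    # 0.5: a person of quite ordinary height comes out «half tall»
tallness_degree(2.00, 1.90, 2.70)   # 0.125: with the 1.90 m zero point, a 2 m person is barely tall
tallness_degree(2.17, 1.90, 2.70)   # ~0.34: even Kareem Abdul-Jabbar is far from clearly tall
tallness_degree(2.30, 1.90, 2.70)   # 0.5: Thomas Thomason, until Robert's accident promotes him to 1
```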

B) That «tall» at degree 1 means «the height of the tallest human being of all time» (up to now).

The first result of this move is merely epistemic: we do not know who, if anyone, among present-day humans is tall at degree 1. There is an exact point for the tallness of human beings, but it is unknowable to us. This point is liable to vary, and probably will vary, over time. Now, almost certainly, it must lie somewhere between 2.50 m and 3 m, and very likely closer to the former than to the latter. Let us suppose that as a matter of fact the tallest human being in history measured 2.70 m. Then, except for the last of the previous problems (the one posed by the passing away of the giant Robert Robertson), all the other problems of the previous analysis remain here just the same.

C) That «tall» at degree 1 means «the height of the tallest possible human being».

This is the assignment that fits best with the infinitist approach, and the one a degree theorist must have been waiting for all along. But then the assignment of tallness at degree 1 is left totally undetermined, because we do not know how tall a human being could possibly be, either physically or conceptually. Should a «human» 500 m tall still count as human? Presumably such huge changes in size would have an impact on his longevity, strength, intelligence (just imagine the size of his brain) and so on. Once more, the only coherent way out is to assign the value 1 to someone of infinite height. But, quite obviously, this is as absurd as regarding only people 0 cm tall as tall at degree 0.

The mistake seems to lie in treating adjectival terms like «tall» («young», «thin», etc.), which in ordinary language either apply to an object or do not apply (or leave us in doubt whether they do), as if they were logically comparative terms: «taller», «younger», «thinner», etc. So the error lies in thinking that if «x is more F than y» is true, then «x is F» cannot be totally true. But if the notion of degree of truth is spelled out in comparative terms, then the existence of degrees of truth between perfect truth and perfect falsity in no way implies the existence of vagueness. Something can be clearly true or clearly false without being so in the infinitist senseFoot note 4_17.

Curiously enough, the infinitist gradualist analysis may perhaps work better with some sortal predicates. Take the example of «table»Foot note 4_18. Let us suppose that «table» means «perfectly horizontal plank supported by legs». Now pose the problem of the horizontality of the plank (if we posed the problem of the maximal and minimal height of its legs, the infinitist would again be helpless). Surely no actual table is perfectly horizontal, although most of them come very close. Well, concerning the horizontality of the (plank of the) table, this time we have at least an intelligible and coherent way of applying the values 0 and 1: 1 when the table is perfectly horizontal, 0 when it is perfectly vertical (at 90° from the horizontal). In this case the values are fixed in an exact, clear and non-arbitrary manner, and also in agreement with our intuitions. Or so it seems. But this is only an appearance, for according to it, four legs supporting a plank with an inclination of 45° make a table at degree 0.5. Yet such an object is not «somewhat a table» or «half a table»: it is not a table at all, for the simple reason that it cannot fulfil the function tables usually have: to be usable for eating, studying, writing or playing a game of chess.

When, then, does a table cease to be a table because of the excessive inclination of its plank? My answer is that it depends upon the kind of table (once more we have context-dependence). A normal table, for instance a table suited for eating, ceases to be so suited when its inclination results, for instance, in the spillage of soup from a standard soup dish placed on it (take the «standard soup dish» to be the average dish presently on sale, and the standard level at which a soup dish counts as «full» to be the maximal amount of soup with which ordinary people can move the dish without spilling its contents, maybe 85% of its volume or so). If, instead, we are talking about a table for study (a desk), the permissible inclination is very likely higher (in fact some desks are inclined). Up to what point is it permissible? Just up to the point at which a sheet of paper or a book slides off without being touched (let us suppose 20°). The key is that it is the object's function that sets the limits on the margins of variability of its properties.

At this point the gradualist can take advantage of my manoeuvre and reply: O.K., if for any given putatively vague term we are able in the end to set a point at which the term no longer applies, why not take this point as degree 0 of the application of the term? For instance, concerning the desk, why not say that a plank with legs at an inclination of 21° is a desk at degree 0, and at an inclination of zero degrees a desk at degree 1? There are three reasons to the contrary: a) because once more we get counterintuitive linguistic and semantical implications (viz. that a plank with an inclination of 10° is a desk at roughly degree 0.5; but many desks are made with just that inclination!); b) because degree theorists always tend to think that their procedure is able to solve, once and for all, every kind of case susceptible to sorites; c) because many-valued finitist logics, infinitist logics and fuzzy logics are all faced with insurmountable logical difficulties that approaches attached to classical logic (like the epistemic theory, to name one) are free of.
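
Here is a minimal sketch of the scale this reply proposes, assuming (my assumption) that it runs linearly between the two anchors given: 0° of inclination for degree 1 and 21° for degree 0. The 10° case is the one that yields the counterintuitive result mentioned in (a).

    # A minimal sketch of the gradualist's proposed scale for «desk»
    # (assumption: the scale is linear between the two anchors in the reply).
    def desk_degree(inclination_deg, cutoff_deg=21.0):
        """Degree to which a legged plank counts as a desk: 1 at 0°, 0 at the cutoff."""
        degree = 1.0 - inclination_deg / cutoff_deg
        return max(0.0, min(1.0, degree))

    print(desk_degree(0.0))   # 1.0   -> a perfectly horizontal plank
    print(desk_degree(10.0))  # ~0.52 -> yet many real desks have roughly this inclination
    print(desk_degree(21.0))  # 0.0   -> the point at which a sheet of paper slides off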

Nevertheless, there is indeed one sphere, that of adjectives involving perfections, in which the infinitist approach seems to work at its best. Take «wise». Here it seems sensible to say that one who knows absolutely nothing, who does not know the truth value of any proposition (or statement, or assertive sentence) at all, is «wise» at degree 0. And someone who knows everything, who knows the truth value of every proposition, is «wise» at degree 1 (if there exists non-propositional knowledge, a «knowing how» different in nature from propositional knowledge, then whoever also possesses this practical knowledge at degree 1 will be «wise» at degree 1). The same goes for «good», «powerful», «intelligent», etc. In short, it is with the kinds of terms the tradition called «pure perfections» that the gradualist approach scores best. Plainly, these predicates apply at degree 1 only to God. That is not a problem (at worst there will be no instances of such predicates at degree 1). The real problem is that, on this conception of meaning, Aristotle and Einstein had an intelligence near zero, for compared with a being capable of posing and solving any decidable question, their minds were certainly poor. But if, as accepted above, meaning supervenes on use, this approach is clearly not in consonance with the actual use of these terms in English. Ordinary language applies «intelligent» or «wise» to humans, and Aristotle and Einstein are paradigms of intelligence, just as Socrates and Buddha are paradigms of wisdom, clearly closer to degree 1 than to 0. The trouble, again, is the same as before: we may assign the 0 (to the newborn baby who knows nothing), but how are we to assign 1 to anything less than the infinite? The troubles we found with «tall» recur here with «wise», «good», etc.

I see the gradualist approach as a sort of ontological argument multiplied and generalized across various categories. For objects simpliciter: there is a perfect table (cat, pine, car...), and all the rest are mere approximations; for qualitative properties: there is perfect wisdom (goodness, intelligence...), and all the rest are mere approximations; for quantitative properties: there is a perfect/maximal tallness (wideness, size, weight...), and all the rest are mere approximations. But none of this is either useful for or relevant to accounting for vague terms in ordinary language.

Now, let us imagine the gradualist replying like this: you are compelling me to accept that degree 1 must always correspond to an infinite value and so, as a result, never applies to actual cases. But what is infinite is only the interval of real numbers between any two points on the straight line. Just call «0» the point at which the predicate no longer applies and «1» the point at which it absolutely applies. Then, taking again the example of the heap (possibly the most typical one), zero grains make up a heap at degree 0, and there is a finite number of grains k at which we at last have a heap at degree 1. And that's it!

But this gives rise to a dilemma: k is either a determinate number or an indeterminate number. If k is intrinsically indeterminate, there is no way to assign degree 1 of heapness; and this amounts to saying that, concerning heaps, we do not know and will never be able to find out whether there is even a single true heap (i.e. a heap at degree 1). But it is this that is highly implausible: we know not only that there are some heaps that are real, genuine and true heaps (of sand, of salt, of books), but that there are thousands of them!

And if, on the other hand, k is a determinate number, say 10,000 (for a heap of sand, for instance), then a heap of 15,000 grains is no more a heap than one of 10,000, because it is not possible to exceed the value 1 (and, incontestably, 15,000 grains do not make a hill). But this latter point is in total consonance with the epistemic theory: there is an exact point (to my mind contextually dependent, and which for sand, sugar, boulders, etc. is four items, and for bricks, books, etc. is three) at which a number of items piled up makes a heap, and such a heap is as much of a heap as one of 10,000 items, or of 12,000, or of 15,000. The difference is that below the point at which degree 1 is reached the gradualist has to draw different consequences: «5,000 grains make up a heap» has a truth value of 0.5, and «2 grains make up a heap» has a truth value of 2/10,000. In such a case: 1) the gradualist applies a different criterion to the cases above the degree-1 point (10,000 grains) than to the cases below it; 2) it is as implausible to say that a heap of 15,000 grains is more of a heap than one of only 10,000 as it is to say that a heap of one grain is less of a heap than one of two grains: the former two are equally heaps; the latter two are equally non-heaps.
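
A minimal sketch of the gradualist picture just described, assuming (my assumption) the simplest mapping: the truth value of «n grains make up a heap» is n/k, capped at 1, with k set to the 10,000 supposed above. The cap is precisely what generates the asymmetry noted in point 1).

    # A minimal sketch (assumption: the simplest gradualist mapping for heapness):
    # the degree grows linearly up to k grains and is capped at 1 thereafter.
    def heap_degree(grains, k=10_000):
        """Truth value of «this many grains make up a heap» on the gradualist picture."""
        return min(grains / k, 1.0)

    print(heap_degree(2))       # 0.0002 -> «2 grains make up a heap» at truth value 2/10,000
    print(heap_degree(5_000))   # 0.5
    print(heap_degree(10_000))  # 1.0
    print(heap_degree(15_000))  # 1.0    -> no more a heap than one of 10,000; the value cannot exceed 1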

I wish to conclude by considering the same matter from the viewpoint of set theory. The spatial representation of the set corresponding to a vague predicate is a circle with fuzzy boundaries. The more vague the concept, the more fuzzy or blurred the boundaries. Since the boundary is fuzzy, there is no way of assigning the centre of the circle to a determinate point (for the same reason that it is senseless to try to assign the value 1 to a single point). But, in having boundaries, blurred as they may be, there are cases unequivocally clear of set membership and (more importantly) of non-membership. However vague a concept may be, it cannot occupy the whole of logical space. If a concept extended over the whole of logical space it would be infinitely vague (and so useless). But just this is the representation that would follow from fuzzy logic.

Mark Sainsbury has proposed regarding vague concepts not as circles with fuzzy boundaries, but as poles of attractionFoot note 4_19. This conception is tempting. In that case there is a single assignment of 1: the pole of attraction itself. And as we move away from it the attraction decreases and tends to zero. But, and this is essential, it never becomes zero. For the logical attraction of concepts within logical space is like the universal gravitation of masses in physical space: it spreads indefinitely towards the borders of the universe. The consequence of this analysis is that, for any vague term x, absolutely everything is x to some extent, however minute, because the pole x exerts its attraction over the whole of logical space. But it seems clearly false that everything (my umbrella, England, the least prime number and my uncle Sarah) is to a higher or lower degree tall, big, happy, beautiful, red, etc. And if, as many infinitist degree theorists hold, all terms are in the end vague, so much the worse, because then everything is to some extent anything.
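
A toy illustration of the point just made, assuming (my assumption, by analogy with gravitation) that the attraction of the pole falls off with the square of the distance. The detail of the decay does not matter; what matters is that the value is positive at every finite distance, so on this picture every item falls under the concept to some nonzero degree.

    # A toy illustration (assumption: attraction decays roughly like gravitation, ~ 1/d^2).
    # The value tends to 0 as we move away from the pole but never reaches it,
    # so everything gets some nonzero degree of the concept.
    def attraction(distance):
        return 1.0 / (1.0 + distance ** 2)   # 1 at the pole itself, never exactly 0

    for d in (0.0, 1.0, 10.0, 1_000.0):
        print(d, attraction(d))              # 1.0, 0.5, ~0.0099, ~1e-06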

In summary, if by «object» we mean what ordinary language takes it to be (and not what the metaphysician determines to be the fundamental components of reality or the genuine individuals), then there are not nearly as many vague objects as it is usual these days to claim (hard luck for the librarian if for most books it were doubtful, not just under what category they should be classified, but whether they are books at all!). Concerning words, certainly there are many vague words, although maybe not that many. And third, and fundamentally, the semantical problems raised by such words are not to be solved by gradualist approaches, whether finitist or infinitist, nor in terms of fuzzy logic. On the contrary, these approaches produce really weird results. I think problems of vagueness should be tackled case by case: first, by restricting the context as much as possible; second, by artificially stipulating a sharp boundary when we are dealing with artifacts, for having created the object we are entitled to create the concept with as much sharpness as we wish (maximal sharpness, for logical purposes); third, in natural cases nature itself has to a great extent established its own sharp boundaries (think of atoms, molecules or minerals); fourth, where troubles with vagueness still remain, this may be due to ignorance of our own (perhaps avoidable, perhaps not), or to a lack of conceptual precision (normally due not to intellectual laziness, but to the fact that precision is unnecessary, or even that looseness is necessary). I suspect that in this, as in many other philosophical questions (like «do theological statements make sense?»), there is no single answer valid for all cases alike; we should instead proceed step by step.

References

Dummett, M. (1975): «Wang's Paradox». Synthese, 30; 301-24 [in Keefe, 1996; 99-118].

Edgington, D. (1996): «Vagueness by Degrees». In Keefe, 1996; 294-316.

Evans, G. & McDowell, J. (eds) (1977): Truth and Meaning: Essays in Semantics. Oxford, Clarendon.

Fine, K. (1975): «Vagueness, Truth and Logic». Synthese, 30; 265-300 [in Keefe, 1996; 119-50].

Hart, W.D. (1991-2): «Hat-Tricks and Heaps». Philosophical Studies (Dublin), 33; 1-24.

Horgan, T. (ed) (1995): Vagueness. Spindel Conference, 1994. The Southern Journal of Philosophy, Vol. XXXIII, Supplement.

Keefe, R. & Smith, P. (eds.) (1996): Vagueness: A Reader. Cambridge, Mass., The MIT Press.

Machina, K. (1972): «Vague Predicates». American Philosophical Quarterly, 9; 225-33.

Machina, K. (1976): «Truth, Belief and Vagueness». Journal of Philosophical Logic, 5; 47-78 [in Keefe, 1996; 174-203].

McGee, V. & McLaughlin, B. (1995): «Distinctions Without a Difference?». In Horgan, 1995; 203-52.

Morgan, C.G. & Pelletier, F.J. (1977): «Some Notes concerning Fuzzy Logics». Linguistics and Philosophy; 79-97.

Peña, L. (1996): «Grados, franjas y líneas de demarcación». Revista de filosofía, IX; 121-50.

Sainsbury, M. (1990): «Concepts without Boundaries». In Keefe, 1996; 251-64.

Sainsbury, M. (1995): Paradoxes. Cambridge, Cambridge University Press.

Sainsbury, M. (1995b): «Why the World Cannot Be Vague». In Horgan, 1995; 63-82.

Tomberlin, J.E. (Ed.) (1994): Philosophical Perspectives 8: Logic and Language. Atascadero, Ca, Ridgeview.

Tye, M. (1994): «Sorites Paradoxes and the Semantics of Vagueness». In Tomberlin, 1994; 189-201 [in Keefe, 1996; 281-93].

Williamson, T. (1990): Identity and Discrimination. Oxford, Blackwell.

Williamson, T. (1994): Vagueness. London, Routledge.

Williamson, T. (1999): «On the Structure of Higher Order Vagueness». Mind, 108; 127-44.

Wright, C. (1976): «Language-Mastery and the Sorites Paradox». In Evans & McDowell, 1977.

Wright, C. (1987a): «Further Reflections on the Sorites Paradox». Philosophical Topics, 15; 227-90 [in Kefee, 1996; 204-50].

Wright, C. (1992): «Is Higher Order Vagueness Coherent?». Analysis, 52; 129-39.

Enrique Romerales

Universidad Autónoma de Madrid

<Enrique.Romerales@uam.es>