Lorenzo Peña
  1. «Phonology»
  2. «Nothing»
  3. «Dialectics and Inconsistency»

Handbook of Metaphysics and Ontology
ed. by H. Burkhardt & Barry Smith
Munich: Philosophia Verlag, 1991
[pp. 703-6, 619-21 & 216-8 (resp).]
ISBN 3-88405-080-X


Phonology

by Lorenzo Peña

An outstanding characteristic of human «natural» language is the linearity of its messages, which is nothing else but the fact that contrasts, i.e. relationships among elements found together within one message, are displayed along one dimension only. Even though such linearity is far from complete, it has impressed many students of language for centuries, characterizing as it does both the first and the second articulation, i.e. not only the relationship among meaningful elements but also that among non-meaningful but distinctive ones. The latter are the phonemes. For thousands of years students of language have been aware of the existence of phonemes, i.e. minimal segments within the spoken message whose presence is relevant for distinguishing one message from a different one with another meaning, even though the phonemes themselves lack any meaning whatsoever. (An anticipation of a modern phonological treatment is to be found in the work of King Sejong of Korea (reigned 1418-50), the founder of the Korean featural script Han'gŭl; see Sampson, pp. 120ff.)

The main difficulty concerning the existence and nature of phonemes is that each of them underlies a great many different phonetic realizations. Such phonetic variation depends on a number of factors. There are individual, free and contextually conditioned variations, whether accountable for in terms of the phonetic influence of neighbouring sounds or not. Even though for thousands of years many people have known about the existence of phonemes in spite of such variations, 19th-century linguists focused on the phonetic realizations themselves.

The Russian linguist Jan Baudouin de Courtenay (1845-1929) was one of the first to anticipate the modern notion of the phoneme, developed in the structuralist movement initiated in 1916 with the publication of the Cours de linguistique générale by Ferdinand de Saussure (1857-1913). That book does not, however, reach the stage of a clear acknowledgment of phonemes. The main developments in the conception of phonemes were attained in the Prague school during the 1919-39 period (esp. by Nikolai Sergeievich Trubetzkoy, 1890-1938), in the American distributionalism initiated by Leonard Bloomfield (1887-1949), and in the French functionalist school headed by André Martinet (1908-), with three non-mainstream tendencies represented by the British structuralist linguist John R. Firth (1890-1960), the «glossematics» school of Copenhagen, started by Louis Hjelmslev (1899-1965), and the generative phonology developed by Morris Halle (1923-) and Noam Chomsky (1928-).

The American distributionalist school insists on a physicalist and set-theoretic view of phonemes, as mutually disjoint classes of sounds. Glossematics regards phonemes as purely abstract entities having nothing to do with phonetics; on such a view there is nothing to a phoneme but what serves to make it different from other phonemes, regardless of whether they are realized phonetically, graphically or through gestures. Martinet has a purely functionalist view of phonemes, but one which does not dispose of phonetic realization: he regards phonemes as entities whose reality is purely relational -- distinctive (that is to say, such that a phoneme is individuated by that which differentiates it from other phonemes) -- but which are characterized in phonetic terms. He rejects the disjointness principle the distributionalists cleave to. He develops Trubetzkoy's ideas on neutralization (the process by which in certain environments the difference between two or more phonemes is lost, because in those contexts certain distinctive features serving to differentiate those phonemes are no longer relevant: for instance, in word-final position in German, voiced /d/ is pronounced like voiceless /t/, the feature of voicedness lacking relevance in that context) via the notion of the archiphoneme, a phonemic entity occurring in those contexts and comprising all realizations of any of those phonemes. That is to say, in word-final position in German there would be neither /t/ nor /d/, but the archiphoneme /T/ instead; of course, such an analysis is by no means the only possible one. Notice that a distinctive feature is any phonetic property relevant for differentiating at least two phonemes from one another.
All those schools insist on positing one list of phonemes only for each language, whereas Firth's polysystemic approach maintains that for different phonological contexts there are different lists of alternative phonemes -- which avoids resorting to the notions of neutralization and archiphoneme. The polysystemic approach, rigorous though it doubtless is, has been generally rejected owing to its enormous complexity and perhaps also to some arbitrariness in drawing up the inventory of contexts which determine the respective phonemic systems.

While all the aforementioned schools take phonemes to be the basic units, generative phonology bestows priority on the distinctive features instead, and regards phonemes as mere classes of distinctive features. Moreover generative phonologists, besides being virtually the only ones to conceive of distinctive features as universal, generally regard them as necessarily binary -- each feature being characterized as a phonetic property which is either downright absent or else fully present. For instance, the English phoneme /i/ is characterized as: [+ syllabic], [- consonantal], [+ sonorant], [+ high], [- low], [+ voiced], [- tense], [- round], [+ front], [- back], [- nasal], [- long], etc. -- although such a characterization contains a lot of redundancy, since not all those specifications are independent. (Defining any of those features goes beyond the scope of this paper.) Another peculiarity of generative phonology is that it posits different levels, with one deep level at which there may be phonemes having features which are not manifested at all at the surface level, and an ordered set of rules turning the deep input into the surface output.
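The feature-bundle conception can be illustrated with a toy model: a phoneme as a mapping from distinctive features to binary values. This is only an illustrative sketch, not any school's actual formalism; the values for /i/ follow the listing above, while those for /u/ are assumed for the sake of contrast.

```python
# Toy model (not an actual linguistic formalism): a phoneme as a
# bundle of binary distinctive features. Values for /i/ follow the
# article's listing; /u/ is a hypothetical contrast case.
i = {"syllabic": True, "consonantal": False, "sonorant": True,
     "high": True, "low": False, "voiced": True, "tense": False,
     "round": False, "front": True, "back": False, "nasal": False}

# /u/ assumed to differ from /i/ only in rounding and backness:
u = dict(i, round=True, front=False, back=True)

def contrasts(a, b):
    """Return the features whose binary values distinguish two phonemes."""
    return {f for f in a if a[f] != b[f]}

print(sorted(contrasts(i, u)))  # prints ['back', 'front', 'round']
```

On this picture a phoneme is nothing over and above its feature specification, and two phonemes are distinct just in case the set returned by `contrasts` is non-empty.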

Generative phonology has gained widespread acceptance even outside the English-speaking world. However, a great many linguists have qualms about the existence of deep levels far away from surface realizations, and even more about the psychological reality of such a deep level or of the rules governing the generation of the surface output. (Once, however, one takes issue with Chomsky's assertion of that psychological reality, the whole nature of the generative process becomes very dubious.) Empirical evidence in support of such posited entities is scarce, and there are abundant indications that naive speakers are wholly unaware of their existence. Moreover, the universalist view of distinctive features can hardly be reconciled with many of the empirical data, while the strictly transitionless binarist principle (the stipulation that all phonological phenomena are to be accounted for in terms of the presence or absence of different properties, with no property being allowed to come in degrees) has been argued to run counter to the continuous, gradual nature of the physiological and psychological processes involved. Furthermore, distinctive features are likely to be somehow less present than the phonemes themselves in the consciousness of naive speakers. In fact, what most commonly differentiates two phonemes is not so much one or several definite distinctive features as a fuzzy cluster thereof.

Thus, e.g., it can be argued that what sets English /p/ apart from /b/ is not just its voicelessness, or its aspiration, which is not realized in certain contexts, but a fuzzy cluster of the fuzzy features of being fortis (as against lenis), voiceless and aspirated, all three of which vary in degree according to context and depending also on individual or other parameters. /p, b/ are characterized in English as bilabial plosives, but in fact they are distinguished from other phonemes by a fuzzy cluster of features, plosiveness and bilabialness varying in degree, /p, b/ being sometimes realized as non-plosive or non-bilabial (e.g. in `hopeful' or `subversive').

Furthermore, the choice of distinctive features in generative phonology can be regarded as somewhat ad hoc, with most features being described in articulatory terms (i.e. terms applying to anatomical or physiological properties pertaining to the utterance of linguistic messages) while others are acoustic. Sometimes a feature raises the suspicion of having been invented in order to complete the binary framework.

Some of those misgivings can probably be dispelled, although they raise important methodological issues. However, the study of phonemic structures is likely to have much to gain from a gradualistic approach. In fact there seem to be lots of borderline cases, such as sounds which up to a point are allophones of (i.e. belong to) some phoneme but to some extent are allophones of a different phoneme; or sounds whose phonemehood is far from complete, whether in some particular contexts or generally; or clusters of sounds which, while to some extent constituting one phoneme, do not reach the same level of unity as other sounds do (the English affricate pronunciation of `ch', e.g., or diphthongs such as that in `how'). Through a gradualistic treatment -- according to which so-called clear-cut situations would be just limit cases -- synchronic phonology could tally with diachronic study in a simpler way than is customary. It is too early, though, to assess the real merit of a gradualistic approach in phonology. (In this connection, an obstacle to be overcome is a widespread adherence to classical logic, which tends to reduce all yes/no questions to alternatives between «completely» and «not at all», whereas there are some non-classical logics which, while keeping the excluded-middle principle, «p or not p», and even the strong version «p or not-p at all», drop what can be termed the classical or over-strong excluded middle, namely «Either it is completely the case that p, or else it is not the case that p at all»; classical logicians are prone to view this schema as only stylistically different from «p or not p».)
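The three schemata just distinguished can be set out side by side, writing H for a hypothetical «it is completely the case that» operator; this is only a sketch of the distinction, not the notation of any particular system:

```latex
\[
\begin{aligned}
&\text{excluded middle (kept):}        && p \lor \lnot p \\
&\text{strong version (kept):}         && p \lor H\lnot p \\
&\text{over-strong version (dropped):} && Hp \lor H\lnot p
\end{aligned}
\]
```

On a degree-valued reading where Hp is wholly true when p is wholly true and wholly false otherwise (and a nonzero degree counts as being partly true), the first two schemata never fail completely, whereas the third fails altogether at intermediate degrees, e.g. when p is exactly half true.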



Nothing

by Lorenzo Peña

Since ancient times philosophers have wondered about the meaning of such negative pronouns and adverbs as `nothing', `nobody', `never'. Apparently, they are nothing else but devices for encapsulating a negation plus an indefinite particle (resp. `something', `somebody', `ever'). But what about negation itself? What is meant by `not'? Even though `nothing' is short for `not... anything', the problem remains of finding out what it is that thereby modifies the existential quantifier `anything' or the entire «proposition» it introduces. Now, the meaning of negation in general being hard to elucidate, the difficulty becomes all the more serious precisely when negation attaches itself not to this or that property in particular -- with the resulting phrase standing for the given property's complement, the `not' being thus taken to be simply syncategorematic -- but to «anything» in general, or to «existence». For, granted that there are properties such as not-being-a-dog, what can be meant by `not being anything' or `failing to exist'? Furthermore, the fact is that the negative pronouns and adverbs can be nominalized. Certain of those nominalizations can be paraphrased through quasi-synonyms such as `not-being' or `inexistence'.

Parmenides warned that only being could be thought about, while non-being was both unthinkable and unsayable. Plato in the Sophist showed that any such contention contradicts itself: in fact Non-being also exists, but rather than being something thoroughly or wholly opposed to being, it is just other-than-being: it negates Being in a non-absolute way. St. Augustine tried to explain both creation and the human fall by resorting to some reification of «Nothingness», while at the same time wanting to believe that by so doing he had disposed of any ontological commitment to any negative principle, the principle being nothing.

After other medieval thinkers had debated the ontological status of Nothingness, in the 13th c. the Cathar philosopher Bartholomeus de Carcassonna brought up the problem anew by claiming that Evil is the origin of Nothingness, which in turn is the stuff of deprivation and imperfection in the world. Bartholomeus's critics alleged that `nihil' cannot be construed in such a way, since e.g. when Jesus is fasting he is not thereby eating Nothingness. Yet Bartholomeus had not contended that every occurrence of `nihil' was to be construed as standing for Nothingness. (On this controversy see (Nelli, 1978).)

The most outstanding continuator of that Platonistic tradition in the Renaissance was Nicholas of Cusa (1401-64), whose philosophy hinges upon asserting the coincidence of opposites in God. In his early Docta ignorantia (1440: see (Cusa, 1964), vol. 1, p. 252) Cusanus regards God as both maximum and minimum and yet beyond such determinations, in such a way that he is closer to nothingness than to being something (magis accedere ad nihil quam ad aliquid: here there is no possible paraphrasing away `nihil' as `non... quiddam', i.e. `not... anything'), since being something is being something definite, which -- to the extent that such is the case -- is ruled out by God's infiniteness. Cusa's last writings (e.g. «De li non aliud» and «De uenatione sapientiae», both of 1462) somewhat reshape the coincidence of opposites, by emphasizing that as they are in God opposites are free from mutual opposition. Nicholas now stresses the reality of nothingness (at that stage usually denoted by the expression `ipsum nihil', which openly defies the attempts to eliminate `nihil' through paraphrase) but in a way places it below God: God in itself is now conceived primarily as Not-Other or Possest (that for which to be able is to be) whereas Nothingness is viewed as the root of passive possibility (posse fieri): see op. cit., vol. 1, pp. 66, 138, vol. 2, pp. 468-70. Nothingness seems to be instead (vol. 1, p. 178) God-as-towards-creatable-things, so much so that even those things' mere possibility is created from God's own nothingness.

There is also a different traditional line concerning usage of the words `nothing' and `non-being', the one stemming from some remarks by Aristotle (Categories 10, 13b15-19) according to which if (and only if) a term is denotationless, all affirmative sentences it enters into are false; whence this principle follows, that a non-being is (or, better, would be) what lacks (or would lack) any and every property. That principle -- which in the Aristotelian corpus exists alongside a different assessment of such sentences (see a detailed discussion in (Peña, 1985), pp. 69ff) -- was bequeathed to the Scholastic masters. Thus the Spanish Jesuit thinker Francisco Suárez (1548-1617) in his Metaphysicæ Disputationes maintains the principle in several ways; his Disputatio 54, devoted to the ens rationis, claims (s.5, n.16): `Aristotle says that this sentence is true, non ens esse non ens seu nihil, since, if it is a non-being, it is not a man or a horse or anything like that...'

The principle was then handed down to 17th c. philosophers. Spinoza (1632-77) received it eagerly and argued on its basis for some of his own boldest claims. Thus prop. # I 9 of his Ethics (the more reality or being a thing has, the more attributes it possesses) is a generalized version of the principle. Within the Spinozian system that proposition spells trouble, since, by prop. # II 7, the order and the connection among things are the same as those among ideas; hence, prop. # II 33 will conclude that there is nothing positive in any idea making it false: falseness is just lack of knowledge (prop. # II 35), and the whole Aristotelian tradition had always regarded lacks as nonexistent (see again Suárez's Disputatio 54, s.5). Spinoza's way out fell back on reduplicative clauses -- as it is in God, any idea is true, but not always as it is in us (see prop. # II 36, prop. # IV 1).

Leibniz (1646-1716), too, inherited the above-mentioned Aristotelian principle. In his «General Investigations» ((Leibniz, 1982), p. 2) he says: `Not-being is what is merely privative, that is to say what lacks everything, i.e. not-Y, which means not-A, not-B, not-C, etc. That is what people mean by saying nihili nullas esse proprietates'. (On the general significance of those remarks, see (Burkhardt, 1980), pp. 102-4.) However, Leibniz is also led by his logical reflections to a quite different approach, namely that whenever a term, `A', denotes no possible being, `A est B' is true. Since Leibniz is confident that no possible thing is B and not-B, the sentence `Nihil est B non-B' will then be true, both taken in the sense of `There is no thing being at the same time B and not-B' and in the sense of `[What is] nothing is [what would be] both B and not-B', i.e. a non-being would be what would have mutually contradictory properties. (See (Couturat, 1901), p. 348, and (Peña, 1990).) The latter approach is bound to clash with Leibniz's cleaving to the syllogistic law of subalternation (according to which, if it is [generally] true that A est B, then there is some entity both A and B: see op. cit., §154, p. 114, and Schupp's commentary, ibid., pp. 161-3). In any case Leibniz, as well as almost all 17th c. thinkers, held on to the Aristotelian tradition which rejects any reality whatsoever liable ever to be denoted by `nothing'. `Nothing' is just `not... anything' in whatever context.

The opposite line (the one rooted in the Platonistic tradition) is taken by Hegel, who, at the beginning of his Logic, developed the dialectics of Being and Nothingness, arguing that Being as such contains neither being-this nor being-that, and so is equal to Nothing, a purely negative concept. More recently Heidegger in several essays has contended that Nothingness is given to us through anguish, thus evincing a reality of sorts which cannot be understood within the framework of logical thinking. Carnap has criticized such a stand, pointing out that it stems from a purely syntactic mistake, namely failing to realize that `nothing' is no noun phrase proper. But even if some of Heidegger's remarks can easily be disposed of by explaining away troubling occurrences of `nothing' in natural language, that does not show that there is no problem at all. What is it that allows us to nominalize `nothing' in apparently reasonable arguments (e.g. this one: `Should there be nothing, even then there would be something, namely that lack of anything, that very same Nothing, or nothingness -- the state of affairs consisting in there being nothing')? Some people have also argued that the meaning of negation is not adequately accounted for by taking it to be a purely syncategorematic symbol. But then what is that entity, the not? Finally, within analytic philosophy itself some accounts are, whether explicitly or implicitly, committed to positing an entity which is Nothing[ness]. Thus Frege's semantics entails that within a formula such as `Nothing is a unicorn', the segment `Nothing is' means a second-order «concept» (property), viz. that of being a [first-order] empty property. But such a second-order property exists. One of the criticisms such an account of Nothing[ness] has prompted is that it jettisons Parmenides' saying that Nothingness is not.

Can all such conflicting considerations be duly taken into account, or even somehow or other merged into a unified treatment? If that is possible at all, the approach which alone would be able to perform the task would most probably be a dialectical metaphysics according to which the particle `not' stands for an entity which both [up to a point] exists and yet [to some extent] fails to exist; insofar as it is a negative principle -- a root of deprivation, of lacking, of failing to be -- it is nonexistent, but its nonexistence is not absolute. Attempts have been made to render such a Neo-Neoplatonistic approach viable through a paraconsistent logic. But some critics have maintained that there is no need for any such solution, Carnap or Frege having already finally elucidated the issue.



Dialectics and Inconsistency

by Lorenzo Peña

The word `dialectics' has been employed to convey quite different meanings. Its pristine sense, especially in Plato's work, was the method of doing philosophy by exchanging questions and answers. Aristotle in the Topics understood dialectics as an art of reasoning subject only to constraints enforcing plausibility rather than cogency, the latter pertaining to logic proper. Later, however, `dialectics' came to mean just logic, and it was in this sense that the word was used through most of the Middle Ages and the Renaissance.

The sense of `dialectics' prevailing nowadays originates, if in a roundabout way, with Kant, for whom dialectics was the study of the ideas of pure reason in their transcendental usage, that is to say when they are used beyond their merely regulative role, thus being assigned a cognitive status which does not belong to them. One of the divisions of such dialectics was the study of the antinomies of pure reason, which are contradictions ensuant upon a transcendental use of the idea of the world. This is how the word `dialectics' came, after Kant, and especially in Hegel's work, to mean the disclosing of insurmountable contradictions.

Kant thought such contradictions insurmountable only insofar as, owing to a transcendental illusion, people are bent on stretching the use of reason beyond its proper scope, taking it to provide knowledge rather than merely regulative, research-guiding standards or ideals. Hegel, while agreeing that those antinomies do indeed arise, argued, for one thing, that countless many other contradictions, too, emerge as true, and, for another, that no contradiction is to be avoided -- since it was only a certain tender-heartedness towards reality which debarred Kant from recognizing that contradictions are in fact present in the real world, instead of being the creation of a purely subjective view of the mind.

Hegel, too, granted that the contradictions are to be overcome, but overcoming or Aufheben, as he understood it, was by no means the same as dispelling or eliminating. He conceived it rather as a process by which that which is overcome is at the same time annulled or cancelled and yet also kept, even enhanced, brought to a higher level. Dialectics is according to Hegel the view, proper to what he calls `negative reason', which -- as against mere understanding -- is able to uncover the contradictions in things.

Such Hegelian views were not without precedent. Forerunners thereof can be seen, e.g., in Heraclitus and Ænesidemus. Similar views are also present in some of Plato's last dialogues, especially the Parmenides and the Sophist, as well as in the Neoplatonist tradition, as represented mainly by Plotinus, Proclus, the Corpus Areopagiticum, John Scotus Eriugena and, to a lesser extent, Augustine. But the main anticipation of the Hegelian thesis of the contradictoriness of the world is to be found in the work of the last great Neoplatonist, Nicholas of Cusa (1401-64), who extolled the understanding of the coincidence of opposites in God as the summit of human thought. This he showed to require a new, non-Aristotelian logic, within which the principle of non-contradiction would be negated, but not rejected -- it would incorporate a copulative approach, according to which both the est et non est and its negation, the nec est nec non est, would be asserted and combined into a new kind of speech.

Some post-Hegelian thinkers, especially Marxists, have tried to rescue Hegelian dialectics, while discarding the Hegelian system, taken as a whole, as dross. Although no unanimity has been reached among interpreters concerning the gist of the dialectical views put forward by authors such as Marx, Engels or Lenin, they can cogently be argued to have stood by the Hegelian belief in the existence of true contradictions. They are not alone in maintaining such a view. Another school in contemporary philosophy which has also upheld dialectics in the sense of asserting the reality of contradictory truths is the kind of energetism espoused by the Romanian philosopher Stéphane Lupasco and his French disciple Marc Beigbeder.

Finally a revival of dialectical thought has been brought about by the construction of so-called paraconsistent systems of logic. Thus, e.g., some logicians have alleged that the dialectical principle of the unity of opposites can be regarded as a defensible proposal within a paraconsistent formal framework.

Likewise, the author of the present article has argued for a dialectical metaphysics which, while agreeing with the Hegelian view that there are contradictory truths, articulates such a view within an entirely different framework, putting forward a quantitative dialectics by stressing that true contradictions are always ensuant upon inbetweenness, i.e. upon the existence of degrees of existence or truth, intermediate between absolute truth and complete falsehood. (Non-sentential -- i.e. ontological -- truth is regarded as nothing else but the existence of states of affairs.)
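The point that true contradictions arise only from inbetweenness can be illustrated with a deliberately simplified degree semantics, using min/max/1-x connectives (a Kleene-style stand-in, not Peña's own, richer system):

```python
# Simplified sketch of degree-valued ("quantitative") dialectics.
# Truth comes in degrees from 0.0 (complete falsehood) to 1.0
# (absolute truth); connectives are the usual min/max/1-x stand-ins.
def neg(p):
    """Negation: the degree of not-p is 1 minus the degree of p."""
    return 1.0 - p

def conj(p, q):
    """Conjunction: the degree of the weaker conjunct."""
    return min(p, q)

# At intermediate degrees the contradiction "p and not-p" itself holds
# to a nonzero degree (though never to more than degree 1/2), while at
# the extremes 0 and 1 it fails completely:
for v in (0.0, 0.25, 0.5, 1.0):
    print(v, conj(v, neg(v)))  # prints 0.0, 0.25, 0.5, 0.0 respectively
```

On this picture a contradiction is (partly) true exactly when its terms are neither absolutely true nor completely false, which is the sense in which clear-cut cases appear as mere limit cases of the gradual ones.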

This dialectical approach -- whose thrust is to be regarded as carrying to its ultimate consequences (something akin to) the Leibnizian principle of continuity -- has been articulated through a paraconsistent, infinite-valued system of logic, which its author has claimed to be the fulfilment of Cusa's project. With such an articulation, which is still in progress, the system has evolved into an axiomatic (fuzzy) set theory, with modal, temporal, doxastic and deontic extensions. It has been offered as a viable solution to difficulties such as the sorites and a number of related paradoxes -- e.g. value and duty conflicts in ethics. On the other hand, some critics (e.g. da Costa 1989) have pointed out that the formal system thus constructed is hard both to master and to assess, as well as undecidable, and that the philosophical approach is fraught with heavy ontological commitments.