Why phonology is different

Table 3. Predictions: UG vs. Emergence (Archangeli et al.).

This study required data with very specific properties. In addition to identifying languages with some pattern subject to both morphological and phonological restrictions, the languages had to be organized into searchable databases, and there needed to be comparable control languages.

An appropriate pattern was found with Bantu height harmony. In many Bantu languages, verb suffixes alternate between high vowels and mid vowels, with the mid vowels occurring after other mid vowels. The pattern is described as morphologically restricted to verbs. The paradigm in Table 4 illustrates the pattern.

Table 4. Bantu height harmony in Ciyao (Ngunga).

The harmonic pattern leads to an expected skewing of the distribution of vowels in these languages: we expect an even distribution of all Vi…Vj sequences except for three, e…i, o…i, and o…u. Each of these three sequences is unexpected in the test-case verbs but expected in the other two environments, Bantu nouns and the control languages.

Control cases with the same five vowels and no harmony were found in freelang dictionaries. In the test words, Archangeli et al. counted the vowel sequences; these counts were used to determine the expected distribution of V1…V2 sequences for each V1…V2 pair in each language. For the test languages, the data were further subdivided into nouns and verbs.
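The skew reported below is measured against these expected counts. As a rough sketch of the kind of computation involved (not the statistic actually used by Archangeli et al.; the function and toy data are our own, for illustration), the expected count for each V1…V2 pair can be derived from the marginal frequencies of V1 and V2, with the deviation expressed so that 0 means "as frequent as expected":

```python
from collections import Counter

def sequence_skew(pairs):
    """pairs: list of (V1, V2) tuples, one per word.
    Returns {(V1, V2): observed/expected - 1}: 0 means "as frequent as
    expected"; negative values mean under-representation (below the 0-line)."""
    counts = Counter(pairs)
    total = len(pairs)
    v1_totals = Counter(v1 for v1, _ in pairs)
    v2_totals = Counter(v2 for _, v2 in pairs)
    return {(v1, v2): observed / (v1_totals[v1] * v2_totals[v2] / total) - 1
            for (v1, v2), observed in counts.items()}

# Toy illustration: harmonic verbs favor e...e and i...i, so e...i falls below 0.
verbs = [('e', 'e')] * 3 + [('i', 'i')] * 3 + [('e', 'i')]
print(sequence_skew(verbs)[('e', 'i')])   # negative: e...i is under-represented
```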

As noted above, of special interest are the sequences e…i, o…i, and o…u, each of which is expected to be underrepresented, given the harmony pattern. In all three cases, the control language averages are very close to 0, while the verbs in the test languages average significantly below 0. At the same time, each of these key sequences is found in verbs in some, if not all, of the test languages. As Archangeli et al. note, only three language/sequence combinations are entirely unattested; in all three cases, the unattested sequence is o…u, which is not found in Chichewa, Ciyao, or Nkore-Kiga.

A close, tight fit between data and generalization would show no occurrences of these sequences in any of the languages. But in all cases, while the distribution of the key sequences in verbs is well below the 0-line, the distance from the 0-line varies by language and by vowel sequence.

In short, we do not see the tight fit predicted by UG; instead we see gradient adherence to the pattern, as predicted by EG. The expectation with morphological extension under EG is that the distribution of the three key sequences will also be depressed in nouns (less than 0, but greater than in the verbs); UG expects these sequences to show a normal, random distribution near 0.

The facts support the EG hypothesis: there is a skewing toward under-representation of these sequences in nouns, though it is not as pronounced as in verbs. Furthermore, the more skewed a verb sequence is, the more skewed the corresponding noun sequence is as well. In this section, we have summarized the argument in Archangeli et al.

Conditions learned from such distributional evidence express the grammatical generalizations that phonologists converge on, and so provide a means of discovering phonological patterns in a language without appeal to innate constraints or to constraint and rule schemata.

From these demonstrations, we conclude that the language-learning infant can discover and express phonological patterns in their language without appeal to innate linguistic universals, at least in the kinds of cases considered: The general strategy of attending to the frequency of different sequences leads to identifying and symbolizing patterns.

Our goal to this point has been to demonstrate the merit of Emergent Grammar: the predictions of EG fit the data better than do the predictions of UG. We turn now to a very different type of question, namely, the implications of EG for other aspects of grammar. That is, does the nature of an analysis change significantly if we adopt EG? In the next section, we argue that there are clear differences in the way a language is represented, exploring the prefix vowel patterns in Esimbi, a Tivoid language of the Bantoid branch of Niger-Congo (Stallcup a, b; Hyman; Coleman et al.).

A standard generative approach to the pattern would be to assign underlying height values to roots, cause prefixes to harmonize with roots in terms of height, and then neutralize all root vowels to high. This results in surface opacity, under the assumption that the prefix height alternation is phonological, because there is no surface phonological trigger for the alternation.
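For concreteness, the opaque interaction can be sketched as an ordered two-step derivation. The vowels, forms, and rule formulations below are hypothetical simplifications for illustration, not actual Esimbi data:

```python
# Schematic derivation under the standard generative analysis: the prefix copies
# the root's underlying height, then a later rule raises every root vowel to
# high, so the mid vowel that conditioned the prefix never reaches the surface.

RAISE = {'e': 'i', 'ɛ': 'i', 'o': 'u', 'ɔ': 'u'}    # mid -> high (assumed mapping)

def harmonize(prefix_v, root_v):
    """Prefix takes on the root's underlying height (simplified here to copying
    the root vowel when that vowel is mid)."""
    return root_v if root_v in RAISE else prefix_v

def neutralize(root_v):
    """All root vowels surface as high."""
    return RAISE.get(root_v, root_v)

u_prefix, u_root = 'i', 'o'                  # hypothetical underlying /i + Co/
s_prefix = harmonize(u_prefix, u_root)       # 'o': harmony applies first
s_root = neutralize(u_root)                  # 'u': then neutralization
print(s_prefix, s_root)                      # o u -- no surface trigger for 'o'
```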

Under EG, we ask first what the learner is likely to generalize based on frequency of category distributions. We then turn to the question of whether these generalizations resolve the opacity problem.

Without going into detail here, we assume that identifying morphs and classes of morphs in a concatenating language like Esimbi is a challenge that the learner has faced and overcome; see Archangeli and Pulleyblank (Forthcoming a, b) for those details. We start at the point at which the learner has already begun identifying nouns and verbs as distinct from each other, and is noting that phonologically different forms of verbs appear with different meanings.

As the data in Table 6 show, verb roots vary in length from one to three syllables. In short, root vowels are high, and root vowels agree in frontness and in rounding. This pattern is further confirmed by inspection of nouns, with representative examples given in Table 7, which shows that this distribution of height and identity holds of all roots, not just of verbs.

Table 6. Verbs with the infinitive prefix (Hyman; tone not included in source).

Table 7. Noun prefix and root vowels (Hyman).

Review of prefixes in Esimbi shows that any of the eight vowels may occur as a prefix vowel, one property that distinguishes prefixes from roots.

Our first set of generalizations, (a)–(e) below, captures the restrictions on roots, restrictions that do not extend to prefixes. We express the sequential conditions as unbounded restrictions on particular feature sequences (Smolensky; Pulleyblank; Heinz); a sketch of how such a condition can be checked is given below. The prefixes are far more challenging.
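Before turning to the prefixes, here is the promised sketch of an unbounded sequential condition. The feature assignments are assumed for illustration and are not the authors' formalism; the point is only that a *[F]…[G] ban can be checked at any distance within a root:

```python
# Ban any root in which a vowel bearing one feature is followed, at any
# distance, by a vowel bearing another -- e.g. *front...round, which enforces
# agreement in frontness/rounding throughout the root.

FEATURES = {
    'i': {'high', 'front'}, 'u': {'high', 'round'}, 'ɨ': {'high'},
    'e': {'front'}, 'o': {'round'},
}

def violates(root_vowels, first, second):
    """True if some vowel with feature `first` precedes (anywhere) a vowel
    with feature `second` -- an unbounded *[first]...[second] condition."""
    seen_first = False
    for v in root_vowels:
        if seen_first and second in FEATURES[v]:
            return True
        if first in FEATURES[v]:
            seen_first = True
    return False

print(violates(['i', 'ɨ', 'u'], 'front', 'round'))  # True: front...round is banned
print(violates(['u', 'ɨ', 'u'], 'front', 'round'))  # False: no violation
```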

In Tables 6, 7, we see that the correct form of the prefix depends in part on the particular prefix involved. In figuring out the morphs of Esimbi, a further set of generalizations is possible, relating prefix morphs to each other.

This set of generalizations is definitive in some cases, shown in (f)–(i), but in other cases options are available, as in (j)–(l). Which prefix morph is selected depends on the root to which the prefix is attached, as summarized in Table 8.

At this point, we have identified lexical properties of both prefixes and roots in Esimbi. Roots are assigned to one of three sets, A, B, and C, and as far as we can tell, the assignments are arbitrary.

That is, there is no phonological property of a root that could be used to determine which prefix occurs with that root. Prefixes are each identified as a collection of morphs. What remains is to identify the generalizations by which roots select the appropriate morph from each set.

The general strategy we propose when selecting among alternatives is to identify the form that best fits whatever requirements there are for a given situation; for Esimbi prefixes, that means selection of the morph that best fits the requirements of the root to which it is attached. Essentially, with Set A roots, the root prefers a high and advanced vowel if possible, while with Set C roots, the preference is for a retracted vowel, preferably low.

With Set B roots, the root gives no guidance and so the most representative morph of the set is selected. Set A roots require the highest, most advanced morph of the set.
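A minimal sketch of this best-fit selection follows; the morphs, features, and preference lists are made-up placeholders standing in for the actual Esimbi prefix sets, not data from the language:

```python
# Each root class lists the features it prefers in its prefix vowel; the morph
# satisfying the most preferences wins. With no preferences (Set B), the
# default morph of the set is chosen.

FEATURES = {'i': {'high', 'advanced'}, 'e': {'advanced'},
            'ɔ': {'retracted'}, 'a': {'low', 'retracted'}}

def select_morph(morph_set, preferences, default):
    if not preferences:                       # Set B: the root gives no guidance
        return default
    return max(morph_set, key=lambda v: sum(f in FEATURES[v] for f in preferences))

prefix_set = ['i', 'a']                       # hypothetical two-morph prefix set
print(select_morph(prefix_set, ['high', 'advanced'], default='a'))  # 'i' (Set A)
print(select_morph(prefix_set, ['low', 'retracted'], default='a'))  # 'a' (Set C)
print(select_morph(prefix_set, [], default='a'))                    # 'a' (Set B)
```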

The key generalization for Set A roots is that these roots prefer that a prefix be high and advanced (Table 9ii). As laid out in Archangeli and Pulleyblank, the grammatical expression of this kind of preference is part of the lexical representation of the verb roots.

For Esimbi, Set A is defined by a specified preference for a preceding high vowel and a preceding advanced vowel (Table 9iii). Defaults (underlined) are discussed below. Our formal representation of selection, shown in Table 9 as well as in Tables 10–12, bears similarities to Optimality Theoretic tableaux (Prince and Smolensky; McCarthy). Differences lie in the nature of the constraints (learned vs. innate).

Tables like those in Tables 9iv–vi are interpreted in a fashion similar to Optimality Theory tableaux (Prince and Smolensky), with the following differences. First, the upper left cell shows the morpho-syntactic features to be manifested in a phonological form (see Archangeli and Pulleyblank, Forthcoming a, for more on this point).

The thumbs up indicates the form selected, given the morphs and conditions. See Archangeli and Pulleyblank (Forthcoming b) for deeper comparison and contrast.

Table. Analysis of prefix selection for Esimbi Set B words and the prefix set with the low vowel option.

The selection generalization and the implementation of best-fit are summarized in Table 9vii. With Set C roots, the analysis is very similar; the key difference is that these roots select for low, retracted vowels in their prefixes. Examples are given in Table 10i.

In this case, the generalization is that low retracted vowels are preferred; in the absence of a low retracted vowel, either a low or a retracted vowel is preferred. Set C is defined and exemplified in Tables 10ii–iii. The selection generalization and the implementation are summarized in Table 10vii.

We consider the default effect first. Set B nouns are illustrated in Table 11i. We propose that Set B roots place no restrictions on morph vowels, leaving the selection to be determined for each affix by other criteria, such as the properties of the morph set itself. Since Set B roots do not impose any selectional restrictions on morph choice, the default form of each prefix is selected, as illustrated in Table 11ii for o-ki TAIL.

Representative examples are given in Table 12i. Inspection of these forms reveals a familiar restriction: not only must vowels agree in backness within roots but, as these data show, within words as well. These two restrictions, stated in Table 12ii, hold of words and so can drive the selection among morphs. Where the morph set contains no morph satisfying the phonotactic condition, the condition plays no deciding role.

In this section, we consider how the default morph might be identified during acquisition.

While completely arbitrary designation of a default morph may be necessary in at least some instances, there is more that can be said in general. First, consider that the default morph must be in an elsewhere relation with selected morphs. While the selected morphs must have specific properties to match selectional criteria, there is no such requirement of the default morph.

We might therefore expect that in at least certain cases, default morphs would not yield as straightforwardly to a unique characterization. This is certainly true in the Esimbi case.

Independent of such selectional issues, we might expect default morphs to exhibit certain properties. For example, all else being equal, we would expect that if morphs differ in their frequency, the more frequent morph is the default morph.

While we consider this hypothesis reasonable, we do not have the data to assess it for Esimbi. An additional property we hypothesize to hold of default morphs is representability, that is, the default morph best represents the full set of morphs. Consider three cases. If there is a single morph in a set, then obviously that morph is fully representative of the set.

If there are two morphs, then it is impossible to speak of one or the other better representing the set as each morph represents an identical but opposite divergence from the set's putative default.

In such binary cases, we might refer to frequency to establish the default morph, but representability will be irrelevant. In cases with more than two morphs, however, we can assess overall properties of the morph set, and identify a particular morph as being representative of those properties.
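One way such representability might be computed is sketched below; the feature values and scoring scheme are our own assumptions for illustration, not a claim about how the assessment is actually carried out:

```python
# With three or more morphs, tally each feature across the whole set and treat
# the morph whose own features best match that tally as the most representative
# member, hence the default.

from collections import Counter

FEATURES = {'u': {'high', 'round'}, 'o': {'round'}, 'a': {'low'}}   # assumed values

def most_representative(morph_set):
    tally = Counter(f for m in morph_set for f in FEATURES[m])
    # score each morph by how widely shared its features are across the set
    return max(morph_set, key=lambda m: sum(tally[f] for f in FEATURES[m]))

print(most_representative(['u', 'o', 'a']))   # 'u' scores highest on this toy set
```

On this view, frequency and representability provide converging heuristics for identifying the default morph once the morph set is large enough for the comparison to be meaningful.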
