Phonological constraints as filters in SLA Raung-fu Chung rfchung@mail.nsysu.edu.tw
1. Introduction

The main components of this article are:
1. The framework of Optimality Theory
2. Acquisition and learnability in OT
3. Our model
4. Concluding remarks
2. The framework of Optimality Theory

(1) The model of OT [diagram not preserved in the source]
For instance, the English plural morpheme /s/ can be realized as either [s] or [z], depending on the preceding sound of the stem:

(2) cat [kæt], cats [kæts]; dog [dɔg], dogs [dɔgz]; hen [hɛn], hens [hɛnz]

The input form is taken to be [s]. We then propose the following constraint, the Voiced Obstruent Prohibition.
(3) Voiced Obstruent Prohibition (VOP): No obstruents can be voiced.
Another constraint called for is:

(4) Obstruent Voicing Harmony (OVH): Adjacent obstruents should share the same value for [voice].

The third constraint is a universal faithfulness constraint of the Ident-IO family (Ident = identical, IO = Input and Output), here referred to as Ident-IO(voice):

(5) Ident-IO(voice): The value of the [voice] feature of the Output should be identical with that of the Input.
As for the ranking, it is obvious, as shown below.

(6) OVH >> VOP (「>>」 = outranks, i.e., takes priority over)

Adding Ident-IO(voice), we have the following ranking for all three constraints just proposed:

(7) OVH >> Id-IO(voice) >> VOP
(8) /dɔg-z/    | OVH | Id-IO(voice) | VOP
☞ a. dɔg-z     |     |              | ***
   b. dɔg-s    | *!  | *            | **
   c. dɔk-s    |     | *!*          | *
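To make the evaluation step concrete, here is a minimal Python sketch of how a ranked tableau like (8) selects its winner: candidates are compared constraint by constraint, highest-ranked first, and the first difference decides. The constraint names and violation counts are copied from (8); everything else (the function names, the lexicographic comparison as a rendering of strict domination) is illustrative only, not part of the original analysis.

```python
# Standard OT evaluation sketch: violation counts from tableau (8).
RANKING = ["OVH", "Ident-IO(voice)", "VOP"]

CANDIDATES = {
    "dog-z": {"OVH": 0, "Ident-IO(voice)": 0, "VOP": 3},  # candidate a
    "dog-s": {"OVH": 1, "Ident-IO(voice)": 1, "VOP": 2},  # candidate b
    "dok-s": {"OVH": 0, "Ident-IO(voice)": 2, "VOP": 1},  # candidate c
}

def profile(marks):
    """Order a candidate's violation counts by the constraint ranking."""
    return tuple(marks[c] for c in RANKING)

def optimal(candidates):
    """The winner has the lexicographically smallest violation profile."""
    return min(candidates, key=lambda name: profile(candidates[name]))

print(optimal(CANDIDATES))   # -> 'dog-z', candidate (a) of tableau (8)
```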
3. Acquisition and learnability in OT
3.1 The notion of learnability

a. Formal learnability in the sense of Tesar & Smolensky (1993) assumed that "all constraints started out being unranked." Later empirical studies (e.g., Gnanadesikan, 1996; Levelt, 1995) pointed out that outputs are initially governed by markedness constraints rather than by faithfulness constraints. This leads to the proposal that in the initial state of the grammar, all markedness constraints outrank all faithfulness constraints, or "M >> F" for short (Kager, Pater & Zonneveld, 2004; Hayes, 2004; Prince & Tesar, 2004).
b. There are two algorithms accounting for the learnability of constraint rankings: the Constraint Demotion Algorithm (CDA) and the Gradual Learning Algorithm (GLA). The CDA, proposed by Tesar & Smolensky (1993, 1998, 2000), ranks a set of constraints on the basis of positive input. For example, L1 acquisition can be interpreted as constraint demotion (Tesar & Smolensky, 1996); a sketch of one demotion step follows.
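The demotion idea can be pictured with a toy sketch. The following is a simplified, hypothetical Python rendering of a single demotion move, not Tesar & Smolensky's full algorithm: every constraint that prefers the loser is demoted to the stratum just below the highest-ranked constraint that prefers the winner. The example reuses the voicing constraints of (3)-(5), starting from the "M >> F" initial state discussed above.

```python
def demote(hierarchy, winner_marks, loser_marks):
    """One step of (simplified) Constraint Demotion.

    hierarchy: list of sets of constraint names, highest stratum first.
    winner_marks / loser_marks: violation counts for one winner-loser pair.
    """
    prefers_winner = {c for c in winner_marks
                      if loser_marks[c] > winner_marks[c]}
    prefers_loser = {c for c in winner_marks
                     if winner_marks[c] > loser_marks[c]}
    # Highest stratum containing a constraint that prefers the winner.
    top = min(i for i, s in enumerate(hierarchy) if s & prefers_winner)
    # Demote loser-preferring constraints ranked at or above that stratum.
    for i in range(top + 1):
        moved = hierarchy[i] & prefers_loser
        if not moved:
            continue
        hierarchy[i] -= moved
        if top + 1 >= len(hierarchy):
            hierarchy.append(set())
        hierarchy[top + 1] |= moved
    return hierarchy

# Initial "M >> F" state: markedness (OVH, VOP) above faithfulness.
h = [{"OVH", "VOP"}, {"Ident-IO(voice)"}]
winner = {"OVH": 0, "Ident-IO(voice)": 0, "VOP": 3}   # dog-z
loser  = {"OVH": 1, "Ident-IO(voice)": 1, "VOP": 2}   # dog-s
print(demote(h, winner, loser))   # VOP drops: [{'OVH'}, {'Ident-IO(voice)', 'VOP'}]
```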
c. The Gradual Learning Algorithm (GLA), developed in Boersma (1997, 1998) and Boersma & Hayes (2001), handles variation in the input and explains gradient well-formedness. The GLA is helpful in accounting for categorization errors a learner makes in both production and perception. L2 learners with restricted constraint sets have to gradually learn to rerank the constraints by raising or lowering the existing ones; a sketch of one such update follows.
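Again a toy sketch, this time of the two ideas behind the GLA: rankings are real values perturbed by evaluation noise, and errors nudge those values by a small plasticity. The ranking values, noise, and plasticity figures below are invented for illustration, and the violation counts reuse tableau (8); this is not Boersma & Hayes's actual implementation.

```python
import random

PLASTICITY = 0.1   # step size for reranking (hypothetical value)
NOISE = 2.0        # evaluation noise (hypothetical value)

def noisy_ranking(values):
    """Draw a noisy selection point per constraint, then rank them."""
    points = {c: v + random.gauss(0.0, NOISE) for c, v in values.items()}
    return sorted(points, key=points.get, reverse=True)

def gla_update(values, learner_marks, adult_marks):
    """After an error, promote every constraint that the learner's
    (wrong) output violates more than the adult form, and demote
    every constraint that the adult form violates more."""
    for c in values:
        if learner_marks[c] > adult_marks[c]:
            values[c] += PLASTICITY   # favors the adult form: raise it
        elif adult_marks[c] > learner_marks[c]:
            values[c] -= PLASTICITY   # favors the error: lower it

# Example: an L2 learner wrongly devoices /dog-z/ to [dok-s].
values = {"OVH": 100.0, "Ident-IO(voice)": 88.0, "VOP": 92.0}
learner = {"OVH": 0, "Ident-IO(voice)": 2, "VOP": 1}   # [dok-s]
adult   = {"OVH": 0, "Ident-IO(voice)": 0, "VOP": 3}   # [dog-z]
gla_update(values, learner, adult)
print(values)           # -> {'OVH': 100.0, 'Ident-IO(voice)': 88.1, 'VOP': 91.9}
print(noisy_ranking(values))   # one noisy evaluation-time ordering
```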
4. Our model: the L1 Filter Hypothesis & OT

[Diagram: L2 input feeds UG (the universal constraint set) through an L1 filter. A native-like ranking yields native-like L2 output; an interlanguage ranking yields interlanguage output. Constraint reranking, itself passing through the L1 filter, moves the interlanguage ranking toward the native-like ranking.]
Empirical arguments:
1. An OT-based analysis of VOT production by Taiwanese EFL learners
2. Diphthong construction in Mandarin and English for Taiwanese learners
3. Errors in the production of [yi] and [wu] by Mandarin EFL learners
An OT-based Analysis of VOT Production by Taiwanese EFL Learners

Note: NSE = native speakers of English; HEFL = highly proficient EFL learners; LEFL = low-proficiency EFL learners; MAN = Mandarin; SM = Southern Min

Acoustic values of VOT (Liou, 2005): [table of measurements not preserved in the source]
Constraints for VOT:
1. The *CATEG(ORIZE) family, which punishes productive categories with certain acoustic values. For example, *CATEG(VOT: /91.5ms/) militates against producing /91.5ms/ as a particular category.
2. The *WARP family, which demands that every segment be produced as a member of the most similar available category. For instance, *WARP(VOT: 9.3ms) requires that an acoustic segment with a VOT of 91.5ms not be produced as any VOT 'category' that is 9.3ms off (or more), i.e. as /82.2ms/ or /100.8ms/ or anything even farther away. A sketch of how these two families interact follows.
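Here is a minimal sketch of how the two families could be evaluated together, assuming *CATEG(/v/) is violated by producing the category /v/ and *WARP(t) by any output t ms or more away from the intended value (both readings taken from the definitions above). The ranking and category inventory reproduce Tableau 3 below; the code is illustrative only.

```python
EPS = 1e-9  # tolerance against floating-point error in the distances

def violated(intended, candidate, constraint):
    """Return 1 if `candidate` violates `constraint`, else 0."""
    kind, arg = constraint
    if kind == "CATEG":                       # *CATEG(/arg/)
        return 1 if candidate == arg else 0
    if kind == "WARP":                        # *WARP(arg ms)
        return 1 if abs(candidate - intended) >= arg - EPS else 0
    raise ValueError(kind)

# Ranking for interlanguage [ph] of HEFL learners (Tableau 3).
RANKING = [("WARP", 16.1), ("CATEG", 91.5), ("CATEG", 75.4),
           ("CATEG", 82.2), ("WARP", 9.3)]
CATEGORIES = [91.5, 82.2, 75.4]   # available VOT categories, in ms

def optimal(intended):
    """Pick the candidate with the lexicographically smallest
    violation profile under RANKING."""
    def profile(c):
        return tuple(violated(intended, c, k) for k in RANKING)
    return min(CATEGORIES, key=profile)

print(optimal(91.5))   # -> 82.2, candidate (b) of Tableau 3
```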
Constraint ranking for English [pʰ] by NSE

Tableau 1: English [pʰ] of NSE [tableau not preserved in the source]
Constraint ranking for [ㄆ] by Taiwanese EFL learners

Tableau 2: Mandarin [ㄆ] by Taiwanese EFL learners

/75.4 ms/ Intended = (ㄆ) | *CATEG(/82.2/) | *CATEG(/91.5/) | *WARP(16.1) | *WARP(6.8) | *CATEG(/75.4/)
a. [91.5ms]               |                | *!             | *           |            |
b. [82.2ms]               | *!             |                |             | *          |
☞ c. [75.4ms]             |                |                |             |            | *
Constraint ranking for interlanguage [pʰ] by Taiwanese EFL learners

Tableau 3: Interlanguage [pʰ] by HEFL

/91.5 ms/ Intended = [pʰ] | *WARP(16.1) | *CATEG(/91.5/) | *CATEG(/75.4/) | *CATEG(/82.2/) | *WARP(9.3)
a. [91.5ms]               |             | *!             |                |                |
☞ b. [82.2ms]             |             |                |                | *              | *
c. [75.4ms]               | *!          |                |                |                |
Tableau 4: Interlanguage [pʰ] by LEFL

/91.5 ms/ Intended = [pʰ] | *WARP(16.1) | *CATEG(/91.5/) | *CATEG(/75.4/) | *CATEG(/78.7/) | *WARP(12.8)
a. [91.5ms]               |             | *!             |                |                |
☞ b. [78.7ms]             |             |                |                | *              | *
c. [75.4ms]               | *!          |                |                |                |
An OT-based Analysis of Mandarin and English Diphthongs for Taiwanese EFL (MSL) Learners
(1) Mandarin vowels: i, ü, u, e, o, a
    SM vowels: i, u, e, o, a
(2) ie: ie (也), tɕie (姐); ei: pei (背), kei (給)
(3) uo: uo (窩), kuo (郭); ou: ou (歐), kou (溝)

Mandarin diphthong construction principle: the front ([-back]) vowels i and e combine only with each other, as in (2), and the back ([+back]) vowels u and o combine only with each other, as in (3).
(4) *NN (N = rime vowel; ＿ = the same; * = ill-formed): *[+back][-back]
A Mandarin diphthong may not combine a [+back] vowel with a [-back] vowel.
(5) SM vowels: front i, e; back u, o; low a

SM diphthongs combine vowels with different [back] features:
(6) iu: iu (憂), kiu (求); ui: ui (胃), kui (貴)
(7) io: examples kio (橋), io (搖)
(8) ue: examples kue (過), hue (血)
(9) *NN (N = nucleus; ＿ = the same; * = ungrammatical): *[+back][+back]
SM bans diphthongs whose two vowels share the same value for [back].
Input   | Violation           | Adjustment  | Result | Samples
a. /ie/ | same front features | [i] deleted | [e]    | 演 pronounced [en]; 夜 pronounced [e]
b. /ei/ | same front features | [i] deleted | [e]    | 杯 pronounced [pe]; 給 pronounced [ke]
c. /uo/ | same back features  | [u] deleted | [o]    | 我 pronounced [o]; 多 pronounced [to]
d. /ou/ | same back features  | [u] deleted | [o]    | 歐 pronounced [o]; 都 pronounced [to]

A sketch of this repair follows.
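This is a toy Python sketch of the repair pattern in the table above, assuming the SM constraint (9) bans a diphthong whose two vowels agree in [back] and that the repair is deletion of the glide; the [back] values follow the vowel charts in (1) and (5). Function and variable names are invented for the illustration.

```python
BACK = {"i": False, "e": False, "u": True, "o": True}  # [back] values
GLIDES = {"i", "u"}

def l1_filter(diphthong):
    """Delete the glide when both vowels agree in [back]."""
    v1, v2 = diphthong
    if BACK[v1] == BACK[v2]:                  # violates *NN in (9)
        kept = [v for v in (v1, v2) if v not in GLIDES]
        return kept[0] if kept else v2
    return diphthong                          # no violation: keep as is

for d in ["ie", "ei", "uo", "ou", "iu", "ui"]:
    print(d, "->", l1_filter(d))
# ie -> e, ei -> e, uo -> o, ou -> o (rows a-d above);
# iu and ui surface intact, as in the SM data in (6).
```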
5. Concluding remarks
a. theoretical implications
b. empirical support
The end