
Wikipedia:United States Education Program/Courses/Psychology of Language (Kyle Chambers)/Summaries



==Speech Perception==


---


The Development of Phonemic Categorization in Children Aged 6-12 by Valerie Hazan and Sarah Barrett [[User:Lkientzle|Lkientzle]] ([[User talk:Lkientzle|talk]]) 15:38, 29 February 2012 (UTC)
This research is important because it indicates that although we seem to be born with an innate sense of how to process phonemes, and by an early age are quite good at it, we should not assume that a person's environment plays no role in the development of even more advanced capabilities in perception. It seems that we can “practice” this distinction and get better at it by being exposed to more instances that make us figure out how to categorize sounds to make sense of them.


---


The Role of Audition in Infant Babbling by D. Kimbrough Oller and Rebecca E. Eilers [[User:Amf14|Amf14]] ([[User talk:Amf14|talk]]) 16:37, 21 February 2012 (UTC)


[[User:Amf14|Amf14]] ([[User talk:Amf14|talk]]) 17:21, 28 February 2012 (UTC)

---


The Impact of Developmental Speech and Language Impairments on the Acquisition of Literacy Skills by Melanie Schuele
[[User:Katelyn Warburton|Katelyn Warburton]] ([[User talk:Katelyn Warburton|talk]]) 20:52, 28 February 2012 (UTC)


---


Longitudinal Infant Speech Perception in Young Cochlear Implant Users
Uhler, K., Yoshinaga-Itano, C., Gabbard, S., Rothpletz, A. M., & Jenkins, H. (2011). Longitudinal infant speech perception in young cochlear implant users. Journal Of The American Academy Of Audiology, 22(3), 129-142. doi:10.3766/jaaa.22.3.2 [[User:Kfinsand|Kfinsand]] ([[User talk:Kfinsand|talk]]) 02:13, 29 February 2012 (UTC)


---


The Role of Talker-Specific Information in Word Segmentation by Infants by Derek M. Houston and Peter W. Jusczyk
Houston, D. M., & Jusczyk, P. W. (2000). The role of talker-specific information in word segmentation by infants. Journal Of Experimental Psychology: Human Perception And Performance, 26(5), 1570-1582. doi:10.1037/0096-1523.26.5.1570 [[User:Smassaro24|Smassaro24]] ([[User talk:Smassaro24|talk]]) 06:53, 29 February 2012 (UTC)


---



Positional effects in the Lexical Retuning of Speech Perception by Alexandra Jesse & James McQueen [[User:Lino08|Lino08]] ([[User talk:Lino08|talk]]) 15:01, 23 February 2012 (UTC)
Jesse, A. & McQueen J. (2011). Positional Effects in the Lexical Retuning of Speech Perception. Psychonomic Society, Inc. doi: 10.3758/s13423-011-0129-2


---


Influences of infant-directed speech on early word recognition by Leher Singh, Sarah Nestor, Chandni Parikh, & Ashley Yull. [[User:Misaacso|Misaacso]] ([[User talk:Misaacso|talk]]) 01:11, 29 February 2012 (UTC)
Singh, Leher, Nestor, Sarah, Parikh, Chandni, & Yull, Ashley. (2009). Influences of infant-directed speech on early word recognition. ''Psychology Press'', 14(6), 654-666. doi: 10.1080/15250000903263973 [[User:Misaacso|Misaacso]] ([[User talk:Misaacso|talk]]) 07:15, 1 March 2012 (UTC)


---


Early phonological awareness and reading skills in children with Down syndrome by Esther Kennedy and Mark Flynn


Kennedy EJ, Flynn MC. Early phonological awareness and reading skills in children with Down syndrome. Down Syndrome Research and Practice. 2003;8(3);100-109. [[User:Lcannaday|Lcannaday]] ([[User talk:Lcannaday|talk]]) 00:48, 1 March 2012 (UTC)

---

Modified Spectral Tilt Affects Older, but Not Younger, Infants’ Native-Language Fricative Discrimination by Elizabeth Beach & Christine Kitamura


Beach, E., & Kitamura, C. (2011). Modified spectral tilt affects older, but not younger, infants' native-language fricative discrimination. Journal Of Speech, Language, And Hearing Research, 54(2), 658-667. doi:10.1044/1092-4388(2010/08-0177) [[User:Mvanfoss|Mvanfoss]] ([[User talk:Mvanfoss|talk]]) 01:21, 1 March 2012 (UTC)


---


Maternal Speech to Infants in a Tonal Language: Support for Universal Prosodic Features in Motherese
[[User:TaylorDrenttel|TaylorDrenttel]] ([[User talk:TaylorDrenttel|talk]]) 01:28, 1 March 2012 (UTC)


---


Stuffed toys and speech perception


There is enormous variation in phoneme pronunciation among speakers of the same language, and yet most speech perception models treat these variations as irrelevancies that are filtered out. In fact, these variations are correlated with the social characteristics of the speaker and listener — you change the way you speak depending on who you're talking to. Now, recent research shows that these variations go beyond just speakers: listeners actually perceive sounds differently depending on who they come from. Jennifer Hay and Katie Drager explored how robust this phenomenon is by testing if merely exposing New Zealanders to something Australian could modify their perceptions.


Subjects heard the same sentences, with a random change in accent. The /I/ sound was modified to sound more like an Australian accent or like a New Zealand accent, and all subjects heard all variations. The only difference between the two groups was the type of stuffed animal present -- either a koala, for the Australian condition, or a kiwi, for the New Zealand condition. After hearing each sentence, participants wrote on an answer sheet if it sounded like an Australian speaker or a New Zealand speaker had read it.
Hay, J., & Drager, K. (2010). Stuffed toys and speech perception. Linguistics, 48(4), 865-892. doi:10.1515/LING.2010.027 [[User:AndFred|AndFred]] ([[User talk:AndFred|talk]]) 03:12, 1 March 2012 (UTC)


---


Infants listen for more phonetic detail in speech perception than in word-learning tasks by Christine L. Stager & Janet F. Werker [[User:Hhoff12|Hhoff12]] ([[User talk:Hhoff12|talk]]) 04:00, 1 March 2012 (UTC)


---


Phoneme Boundary Effect in Macaque Monkeys
Kuhl, P.K., Padden, D.M., (1982). Enhanced discriminability at the phonetic boundaries for the voicing feature in macaques. Perception and Psychophysics. doi: 10.3758/BF03204208
[[User:Anelso|Anelso]] ([[User talk:Anelso|talk]])

---


This article was about speech perception remaining the same even when the canonical acoustic cues to phonemes are removed from the spectrum. A portion of this perceptual flexibility can be attributed to modulation sensitivity in the auditory-to-phonetic projection. Three tests were conducted to estimate the effects of exposure to natural and sine-wave samples of speech on this kind of perceptual versatility. Sine-wave speech is synthesized differently from the natural voice and deletes particular phonetic cues.
Remez, Robert E.; Dubowski, Kathryn R.; Broder, Robin S.; Davids, Morgana L.; Grossman, Yael S.; Moskalenko, Marina; Pardo, Jennifer S.; Hasbun, Sara Maria; Journal of Experimental Psychology: Human Perception and Performance, Vol 37(3), Jun, 2011. pp. 968-977.
[[User:Gmilbrat|Gmilbrat]] ([[User talk:Gmilbrat|talk]])

---


Miller, Joanne L.; Mondini, Michele; Grosjean, Francois; Dommergues, Jean-Yves; Language and Speech: Dialect Effects in Speech Perception: The Role of Vowel Duration in Parisian French and Swiss French, Vol 54(4), p. 467-485. [[User:Sek12|Sek12]] ([[User talk:Sek12|talk]])
[[User:Lkientzle|Lkientzle]] ([[User talk:Lkientzle|talk]]) 06:06, 8 March 2012 (UTC)


---





Emotion Words Affect Eye Fixations During Reading (Graham G. Scott, Patrick J. O'Donnell, and Sara C. Sereno) [[User:Katelyn Warburton|Katelyn Warburton]] ([[User talk:Katelyn Warburton|talk]]) 21:49, 28 February 2012 (UTC)
Scott, G. G., O'Donnell, P. J., & Sereno, S. C. (2012). Emotion words affect eye fixations during reading. Journal of Experimental Psychology: Learning, Memory, and Cognition, doi: 10.1037/a0027209


---


The Structural Organization of the Mental Lexicon and Its Contribution to Age-Related Declines in Spoken-Word Recognition [[User:Amf14|Amf14]] ([[User talk:Amf14|talk]]) 04:22, 29 February 2012 (UTC)


[[User:Amf14|Amf14]] ([[User talk:Amf14|talk]]) 19:52, 6 March 2012 (UTC)



Sommers, M. (1996). The structural organization of the mental lexicon and its contribution to age-related declines in spoken-word recognition. Psychology And Aging, 11(2), 333-341. doi:10.1037/0882-7974.11.2.333


---




Evidence for Sequential Processing in Visual Word Recognition (Peter J. Kwantes and Douglas J. K. Mewhort)
Kwantes, P. J., & Mewhort, D. K. (1999). Evidence for sequential processing in visual word recognition. Journal Of Experimental Psychology: Human Perception And Performance, 25(2), 376-381. doi:10.1037/0096-1523.25.2.376
[[User:Smassaro24|Smassaro24]] ([[User talk:Smassaro24|talk]]) 16:22, 3 March 2012 (UTC)

---


Syllabic Effects in Italian Lexical Access


Implications of this article allow readers to deepen their understanding of speech perception cross-culturally. It also aids in understanding the complexities of human speech perception and how this can vary based on minute differences across languages.



Tagliapietra, L., Fanari, R. R., Collina, S. S., & Tabossi, P. P. (2009). Syllabic effects in Italian lexical access. Journal Of Psycholinguistic Research, 38(6), 511-526. doi:10.1007/s10936-009-9116-4
[[User:Kfinsand|Kfinsand]] ([[User talk:Kfinsand|talk]]) 06:22, 6 March 2012 (UTC)

---


Every day, people say sentences that activate parts of the brain. In these sentences we use contextual information that helps reactivate specific words to gain the meaning behind the sentence. In this article, the experimenters were curious about the type of activation involved in word processing. Specifically, they were concerned with the vertical spatial information associated with words (e.g., roof vs. root).
[[User:Gmilbrat|Gmilbrat]] ([[User talk:Gmilbrat|talk]])


---


Semantic processing and the development of word recognition skills: Evidence from children with reading comprehension difficulties


Nation K., Snowling M. J. (1998). Semantic processing and the development of word recognition skills: Evidence from children with reading comprehension difficulties. Journal of Memory and Language, 39, 85–101. [[User:Lcannaday|Lcannaday]] ([[User talk:Lcannaday|talk]]) 16:16, 8 March 2012 (UTC)

---


Speaker Variability Augments Phonological Processing in Early Word Learning by Gwyneth Rost and Bob McMurray [[User:Misaacso|Misaacso]] ([[User talk:Misaacso|talk]]) 20:02, 7 March 2012 (UTC)
Rost, C. Gwyneth and McMurray, Bob. (2009). Speaker variability augments phonological processing in early word learning. Developmental Science, 12(2), 339-349. doi: 10.1111/j.1467-7687.2008.00786.x [[User:Misaacso|Misaacso]] ([[User talk:Misaacso|talk]]) 06:46, 8 March 2012 (UTC)


---


Morphological Awareness: A key to understanding poor reading comprehension in English
Tong, X., Deacon, S., Kirby, J. R., Cain, K., & Parrila, R. (2011) Morphological Awareness: A key to understanding poor reading comprehension in English. Journal of Educational Psychology, 103(3), 523-534. doi:10.1037/a0023495


---

Foveal processing and word skipping during reading by Denis Drieghe


Drieghe, D. (2008). Foveal processing and word skipping during reading. Psychonomic Bulletin & Review, 15(4), 856-860. doi:10.3758/PBR.15.4.856
[[User:Mvanfoss|Mvanfoss]] ([[User talk:Mvanfoss|talk]]) 00:22, 8 March 2012 (UTC)

---


Automatic Activation of Location During Word Processing


Lachmair, M., De Filippis, M. & Kaup, B. (2011). Root versus roof: automatic activation of location information during word processing. Psychonomic Society, Inc. doi:10.3758/s13423-011-0158-x [[User:Lino08|Lino08]] ([[User talk:Lino08|talk]]) 05:54, 8 March 2012 (UTC)

---


Brown, Susan W. (2008). Polysemy in the mental lexicon. Colorado Research in Linguistics, 21, 1-12.
[[User:Ahartlin|Ahartlin]] ([[User talk:Ahartlin|talk]]) 03:08, 8 March 2012 (UTC)


---


Lanthier, S. N., Risko, E. F., Stolz, J. A., & Besner, D. (2009). Not all visual features are created equal: Early processing in letter and word recognition. Psychonomic Bulletin & Review, 16(1), 67-73. doi:10.3758/PBR.16.1.67 [[User:Hhoff12|Hhoff12]] ([[User talk:Hhoff12|talk]]) 06:45, 8 March 2012 (UTC)
All the research culminates by showing that vertices are an important feature.


---


Word misperception, the neighbor frequency effect, and the role of sentence context: evidence from eye movements
Slattery, T. J. (2009). Word misperception, the neighbor frequency effect, and the role of sentence context: Evidence from eye movements. Journal Of Experimental Psychology: Human Perception And Performance, 35(6), 1969-1975. doi:10.1037/a0016894 [[User:AndFred|AndFred]] ([[User talk:AndFred|talk]]) 22:33, 8 March 2012 (UTC)


---



Parise, Eugenio; Palumbo, Letizia; Handl, Andrea; Friederici, Angela D. (2011). Influence of Eye Gaze on Spoken Word Processing: An ERP Study With Infants. Child Development, May/June, Vol. 82, No. 3, pp. 842-853. [[User:Sek12|Sek12]] ([[User talk:Sek12|talk]])


---




The author used the boundary paradigm to test two hypotheses that might explain why no conclusive evidence has been found for the existence of n+2 preprocessing effects. Two experiments were used to address the conflicting results from previous research, and the results explain why similar earlier experiments produced opposite findings.


Angele B, Rayner K. Parafoveal processing of word n + 2 during reading: Do the preceding words matter?. Journal Of Experimental Psychology: Human Perception And Performance [serial online]. August 2011;37(4):1210-1220. Available from: PsycINFO, Ipswich, MA. Accessed March 9, 2012. [[User:Zc.annie|Zc.annie]] ([[User talk:Zc.annie|talk]])

---


Semantic Facilitation


==Sentence Processing==


---


Sentence Comprehension and General Working Memory


Working memory is defined in this study as the “workspace in which information is maintained and manipulated for immediate use in reasoning and comprehension tasks” (Moser, Fridriksson, & Healy, 2007). This information is later turned into permanent information that can be retrieved at a later date. Studies in the past have shown a correlation between sentence processing and working memory, but the interpretation of this relationship has remained unclear. The present study sought to understand this correlation better. This is an important study because there is disagreement about the cognitive construct of working memory. Specifically, some researchers believe that there is a separate working memory for mediating sentence comprehension, while others believe it is all in one confined network. There are also clinical implications for this research, which include treating aphasia patients who generally have difficulty matching sentences to pictures and deriving themes from sentences. This is said to be a symptom of damage to Broca’s area, and more research on working memory and sentence processing could hopefully aid in aphasia research by localizing where these processes are occurring. In general, previous research has focused on verbal working memory tasks through a reading span task that has participants judge how truthful a statement is after hearing it, and then say the last word of as many of the sentences as possible. The problem with this type of task is that sentence processing and reading span tend to overlap, and therefore research needs to analyze nonverbal working memory tasks as well.


The present research is hoping to build on previous research on this topic by introducing a nonverbal working memory task to see if working memory is not just language based (verbal), but generally based (including also nonverbal). If sentence processing is correlated also with nonverbal working memory, then we can assume that there is not a separate area responsible for this outside of other “types” of working memory.


Sentence-Parsing (SP), Lexical Decision (LD), and non-verbal working memory tasks have been used in the past to gain understanding of how humans interpret language in different ways, and were all utilized in the present study to uncover how working memory affected the participants’ sentence processing. The lexical decision task presented participants with both “real” words and “non-words” and required them to respond as quickly and accurately as possible to which it was. This was used as a control task. The sentence-parsing task presented the participants with both semantically plausible (i.e. The goalie the team admired retired) and non-plausible sentences (i.e. The candy that the boy craved ate) on a computer screen and the participants (upon the onset of the last word) had to choose whether it was plausible or not by answering “yes” or “no”.
The nonverbal working memory task involved in this study consisted of presenting Chinese characters to participants on a page and then later asking them to use those characters in a simple equation. They would have to decide if the equation’s solution was correct by selecting either a “no” or “yes” button. 60 of these trials were completed checking for both accuracy and reaction times.


Participants individually completed these tasks online, and then an additional nonverbal working memory task was given offline. The independent variables were the four different tasks the participant was made to do (nonverbal working memory task, SP task, LD task all of which were online, and an additional nonverbal working memory task performed offline). The dependent variables of this study were the reaction times of each task, as well as the accuracy in completing each task.


The present study found that nonverbal working memory is correlated with sentence processing, suggesting that the ability to do sentence parsing is in relation to a GENERAL working memory capacity, and is not just language specific. This is what the researchers had hypothesized in the beginning of their study, but the correlation that was yielded was only of moderate effect (.51 on average). This study reinforces the idea that there is just one single working memory capacity that is responsible for all types of language processing, not multiple working memories responsible for different tasks.


Moser, D. C., Fridriksson, J., & Healy, E. W. (2007). Sentence comprehension and general working memory. Clinical Linguistics & Phonetics, 21(2), 147-156. doi:10.1080/02699200600782526
[[User:Lkientzle|Lkientzle]] ([[User talk:Lkientzle|talk]]) 03:21, 15 March 2012 (UTC)


---





TAKING PERSPECTIVE IN CONVERSATION: The Role of Mutual Knowledge in Comprehension by Boaz Keysar, Dale J. Barr, Jennifer A. Balin, and Jason S. Brauner [[User:Amf14|Amf14]] ([[User talk:Amf14|talk]]) 19:05, 6 March 2012 (UTC)
These findings suggest that although mutual knowledge is necessary in a conversation, the knowledge does not restrict a person’s interpretation. They still consider all possible referents, causing them to be forced to use mutual knowledge in comprehension. This causes a delay in understanding.


---




Eye Movements of Young and Older Adults During Reading
Kemper, S., & Liu, C. (2007). Eye movements of young and older adults during reading. Psychology And Aging, 22(1), 84-93. doi:10.1037/0882-7974.22.1.84 [[User:Kfinsand|Kfinsand]] ([[User talk:Kfinsand|talk]]) 03:00, 8 March 2012 (UTC)


---


Effects of Age, Speed of Processing, and Working Memory on Comprehension of Sentences With Relative Clauses (Caplan, DeDe, Waters, Michaud) [[User:Katelyn Warburton|Katelyn Warburton]] ([[User talk:Katelyn Warburton|talk]]) 20:46, 8 March 2012 (UTC)


Caplan, D., DeDe, G., Waters, G., & Michaud, J. (2011). Effects of age, speed of processing, and working memory on comprehension of sentences with relative clauses. Psychology and Aging, 26(2), 439-450.

---


Pay now or pay later: Aging and the role of boundary salience in self-regulation of conceptual integration in sentence processing [[User:Smassaro24|Smassaro24]] ([[User talk:Smassaro24|talk]]) 18:46, 10 March 2012 (UTC)


Stine-Morrow, E. L., Shake, M. C., Miles, J. R., Lee, K., Gao, X., & McConkie, G. (2010). Pay now or pay later: Aging and the role of boundary salience in self-regulation of conceptual integration in sentence processing. Psychology And Aging, 25(1), 168-176. doi:10.1037/a0018127

---


Effects of Age, Speed of Processing, and Working Memory on Comprehension of Sentences
After analyzing the results of their two experiments, the researchers found that working memory and speed of processing were positively correlated. They also found that accuracy of sentence comprehension was positively correlated with working memory. The results from experiment one show that reading times for the cleft-object were longer than for the cleft-subject sentences and the reading times for subject-object sentences were longer than those for the subject-subject sentences. These effects of age and working memory on accuracy and reading times correspond with results from previous studies. The researchers found that the correlations between age, speed of processing, and working memory to comprehension measures followed what they predicted – that they would increase with age and decrease with faster speed of processing and a larger working memory. Their analysis showed that older individuals spend more time reading/processing the sentence but had poorer comprehension of the sentences. This research shows that age does have an effect on sentence comprehension and that more research should be done to see if there is anything the aging population can do in order to slow this decline in comprehension ability so that they can remain functional in society for longer.

Caplan, D., DeDe, G., Waters, G., & Michaud, J. (2011). Effects of Age, Speed of Processing, and Working Memory on Comprehension of Sentences with Relative Clauses. Journal of Psychology and Aging 26 (2) 439-450. doi:10.1037/a0021837 [[Special:Contributions/138.236.22.152|138.236.22.152]] ([[User talk:138.236.22.152|talk]]) 01:25, 13 March 2012 (UTC)

---
The authors believe that this experiment accurately portrays the perceptual grouping process. These results suggest that the competitor word was more likely to be activated when the target word is stated. Their data support the notion that natural utterances are more easily processed compared to unnatural utterances.
In the future the experimenters want to work on natural conversational speech instead of sentences that are rarely stated. They also want to work on the prosody of the natural sentences.

---


Brown, Meredith; Salverda, Anne Pier; Dilley, Laura C.; Tanenhaus, Michael K.; Psychonomic Bulletin & Review, Vol 18(6), Dec, 2011. pp. 1189-1196.
[[User:Gmilbrat|Gmilbrat]] ([[User talk:Gmilbrat|talk]])


---


Sentence comprehension in young adults with developmental dyslexia by Wiseheart, Altmann, Park, and Lombardino. [[User:Misaacso|Misaacso]] ([[User talk:Misaacso|talk]]) 02:27, 14 March 2012 (UTC)
Wiseheart, R., Altmann, L. J. P., Park, Heeyoung & Lombardino, L. J. (2009). Sentence comprehension in young adults with developmental dyslexia. Ann. of Dyslexia, 59, 151-167. doi: 10.1007/s11881-009-0028-7 [[User:Misaacso|Misaacso]] ([[User talk:Misaacso|talk]]) 05:28, 15 March 2012 (UTC)


---


Arnold, J. E., Tanenhaus, M. K., Altmann, R. J., & Fagnano, M. (2004). The old and thee, uh, new: Disfluency and reference resolution. Psychological Science, 15(9), 578-582. doi:10.1111/j.0956-7976.2004.00723.x [[User:Hhoff12|Hhoff12]] ([[User talk:Hhoff12|talk]]) 14:40, 15 March 2012 (UTC)

---


Processing Coordination Ambiguity by Engelhardt and Ferreira
Engelhardt, P. E., & Ferreira, F. (2010). Processing coordination ambiguity. Language And Speech, 53(4), 494-509. doi:10.1177/0023830910372499 [[User:Mvanfoss|Mvanfoss]] ([[User talk:Mvanfoss|talk]]) 01:02, 15 March 2012 (UTC)


---


“Integration of Visual and Linguistic Information in Spoken Language Comprehension”
Tanenhaus, M.K., M.J. Spivey-Knowlton, K.M. Eberhard, J.C. Sedivy (1995). Integration of Visual and Linguistic Information in Spoken Language Comprehension. Science. doi: 10.1126/science.7777863. [[User:Anelso|Anelso]] ([[User talk:Anelso|talk]])


---


Grammatical and resource components of sentence processing in Parkinson’s disease


Grossman M, Cooke A, DeVita C, Lee C, Alsop D, Detre J, Gee J, Chen W, Stern MB, Hurtig HI. Grammatical and resource components of sentence processing in Parkinson’s disease: an fMRI study. Neurology. 2003;60:775–781. doi: 10.1212/01.WNL.0000044398.73241.13 [[User:Lcannaday|Lcannaday]] ([[User talk:Lcannaday|talk]]) 16:13, 15 March 2012 (UTC)

---


==Bilingualism==


The Forgotten Treasure: Bilingualism and Asian Children's Emotional and Behavioral Health (Wen-Jui Han and Chien-Chung Huang) [[User:Katelyn Warburton|Katelyn Warburton]] ([[User talk:Katelyn Warburton|talk]]) 20:32, 8 March 2012 (UTC)

---


English Speech sound development in preschool-aged children from bilingual English-Spanish environments (Gildersleeve-Neumann, C. E., Kester, E. S., Davis, B. L., & Peña, E. D.) [[User:Kfinsand|Kfinsand]] ([[User talk:Kfinsand|talk]]) 07:36, 13 March 2012 (UTC)


---
Taking Perspective in Conversation: The Role of Mutual Knowledge in Comprehension
[[User:TaylorDrenttel|TaylorDrenttel]] ([[User talk:TaylorDrenttel|talk]]) 20:47, 13 March 2012 (UTC)


Keysar, B., Barr, D.J., Balin, J. A., & Brauner, J. S. (2000). Taking perspective in conversation: the role of mutual knowledge in comprehension. Psychological Science, 11(1), 32-38. doi:10.1111/1467-9280.00211


---



Language mixing in bilingual speakers with Alzheimer’s dementia: a conversation analysis approach [[User:Amf14|Amf14]] ([[User talk:Amf14|talk]]) 03:01, 15 March 2012 (UTC)

Revision as of 18:09, 15 March 2012

Please add your 500 word summaries in the appropriate section below. Include the citation information for the article. Each student should summarize a different article, so once you have chosen an article, I would recommend adding the citation with your name (type 4 tildes). That way others will not choose the same article as you. You can then come back later and add your summary.

Speech Perception

The Development of Phonemic Categorization in Children Aged 6-12 by Valerie Hazan and Sarah Barrett Lkientzle (talk) 15:38, 29 February 2012 (UTC)

In 2000, Hazan and Barrett sought to find evidence for development of phonemic categorization in children aged 6 to 12 and compare this to adult subjects. They wanted to test whether categorization is more consistent with dynamic or static cues, as well as how this changes if there are several cues available versus limited cues to signal the phonemic differences. For example, how well can a child distinguish /d/-/g/ and /s/-/z/ depending on the cues given, and how does this compare to how an adult does this task? The reason why this study was so important was because previous research had yielded contradictory results for the age in which children’s perception of phonemic categorization is at an adult level, and criteria and methods for testing this were inconsistent. This study is also important because it provides evidence that phoneme boundary sharpening is still developing well after the age of 12 into adulthood.

Previous research has repeatedly shown a developmental trend that as children grow older, they can better categorize phonemes into their respective categories consistently. The issue of at what age phonemic categorization becomes adult-like is still debatable, though. Some studies have not found significant differences between 7 year olds and adults in their ability to categorize (Sussman & Carney, 1989). Other studies have found the opposite results (significant differences between age groups) with virtually the same criteria (Flege & Eefting, 1986). The present study by Hazan and Barrett sought to re-evaluate these previous findings in a manner that was very controlled, and see if 12 year olds (the oldest of their participant pool, next to their adult control group) were performing at the level of adults signifying the end of this developmental growth.

The test was run with 84 child subjects, aged 6-12, and with 13 adult subjects that served as a control group. Each subject was run separately, and had to complete a two-alternative forced-choice identification procedure that contained different synthesized phoneme sounds. These phoneme sounds were presented on a continuum starting from one sound (/d/) to another (/g/). When a participant had accurately identified at least 75% of a correct phoneme, the next sound on the continuum was presented. This outline was adapted for four different test conditions that each tested a different phoneme continuum (such as /s/-/z/) and was presented as either a “single cue” or “combined cue”.

The dependent variables of this study were the categories chosen by participants for the sounds they heard. The independent variables were then the different conditions such as: which phoneme continuum was used and if it was single or combined cue presentation. The combined cue condition varied from a typical presentation of the sounds by varying the contrasting cues by harmony.

This study found that, like their hypothesis presumed, children continue to develop their ability to categorize phonemes as they age, and this continues even after the child turns 12. The researchers controlled for any extraneous variables such as attention deficits in the children, language barriers, or hearing deficits as well. Previous research on young children has shown that humans are proficient at identifying categories by the age of three, but the present study indicates that this ability only grows with age, and that children become more competent in ambiguous-cue situations. The study, therefore, states that there is no reason to presume a child is as competent as an adult at making these distinctions by the age of 12, as some previous research had suggested.

This research is important because it indicates that although we seem to be born with an innate sense of how to process phonemes, and by an early age are quite good at it, we should not assume that a person’s environment plays no role in the development of even more advanced perceptual capabilities. It seems that we can “practice” this distinction and get better at it by being exposed to more instances that make us figure out how to categorize sounds in order to make sense of them.

---

The Role of Audition in Infant Babbling by D. Kimbrough Oller and Rebecca E. Eilers Amf14 (talk) 16:37, 21 February 2012 (UTC)

A number of questions have been raised about the importance of experience in learning to talk. It is possible that infants are born with built-in speech capabilities, but it is also possible that auditory experience is necessary for learning to talk. Oller and Eilers proposed that if deaf infants babble in the same typical patterns as hearing infants, it would be evidence that humans are born with innate abilities to speak. To test this proposal, they needed to study what types of speech emerge at each stage of the first year of an infant’s life. By the canonical stage (7-10 months), infants generally utter sounds characterized by repetitions of certain sequences such as dadada or baba. Research has shown that deaf infants reach this stage later in life than hearing infants.

It has been moderately challenging to study deaf infants in the past because it is uncommon for hearing disabilities to be diagnosed within the first year of a child’s life. It is also difficult to find deaf infants with no other impairments who have had severely impaired hearing since birth and have been diagnosed within the first year of their lives.

In this experiment, 30 infants were analyzed, 9 of them severely or profoundly hearing impaired. Each infant was followed in order to determine at what age they reached the canonical stage. The two groups were designated based on whether the infants were deaf or hearing. In both groups, infants’ babbling sequences were tape recorded in a quiet room with only the parent and the experimenter present. The number of babbling sequences was counted by trained listeners for each infant. The listeners based their counts on four main criteria, including whether the infant used an identifiable vowel and consonant, the duration of a syllable, and the use of a normal pitch range. Vegetative and involuntary sounds such as coughs and growls were not counted. The infants were prompted by their parents to vocalize while in the room. If they did not comply, or if their behavior was considered abnormal compared to their actions at home, the session was rescheduled.

Results showed that normal-hearing infants reached the canonical stage of speech by 7-10 months. Deaf infants, on the other hand, did not reach this stage until after 10 months. When both groups were analyzed at the same age, none of the deaf infants produced babbling sounds that qualified as canonical. The hearing subjects produced approximately 59 canonical utterances per infant. The deaf subjects babbled approximately 50 utterances, but 5-6 months later than the hearing subjects did.

Overall, hearing-impaired infants show significant delays in reaching the canonical stage of language development. Oller and Eilers concluded this to be due to their inability to hear auditory speech. There is evidence to support the idea that hearing aids can help infants reach babbling stages earlier, while completely deaf infants may never reach the canonical stage. Within the experiment, both groups of babies showed similar patterns of growls, squeals, and whispers at the precanonical stage, but once the infants reached an age where language should develop further, audition and modeling played a far more important role. This leaves deaf children significantly behind in speech development.

Oller, D., & Eilers, R. E. (1988). The role of audition in infant babbling. Child Development, 59(2), 441-449. Doi:10.2307/113023

Amf14 (talk) 17:21, 28 February 2012 (UTC)

---

The Impact of Developmental Speech and Language Impairments on the Acquisition of Literacy Skills by Melanie Schuele

Previous studies have wrestled with the task of identifying speech/language impairments in children and determining the means by which they can be remedied. Language impairments are often precursors to lifelong communication difficulties as well as academic struggles. Hence, researchers past and present are focused on understanding speech/language impairments and finding solutions for children and adults alike. Schuele (2004) provides a review of previous studies that focus on differentiating and evaluating developmental speech impairments.

Individuals struggling with speech/language impairments are often referred to as language delayed, language disordered, language impaired, and/or language disabled. However, the review article defines and builds off of three key types: speech production impairments, oral language impairments, and combined speech production and oral language impairments. Furthermore, a distinction is made between two developmental speech impairments: articulation disorders and phonological disorders. Articulation disorders have a motoric basis that results in difficulty pronouncing particular speech sounds. For example, a child may substitute /w/ sounds for /r/; therefore, “rabbit” would sound like “wabbit.” Phonological disorder (PD) is a cognitive-linguistic disorder that results in difficulty with multiple speech sounds and is detrimental to overall speech intelligibility.

Researchers distinguish between children with PD alone and children with PD plus language impairment (PD + Language), who are classified based on their cognitive-linguistic abilities. In one study testing for reading disabilities, only 4% of the PD group showed a disability in word reading and 4% in comprehension. In contrast, within the PD + Language group, 46% were classified as disordered in word reading and 25% were classified as disordered in reading comprehension.

A second study focused specifically on the differences between PD alone and PD + Language. Children between the ages of 4 and 6 were assessed and then re-evaluated upon their entry into third and fourth grade. Assessments revealed that PD + Language children had more severe speech deficits, lower language scores, fewer cognitive-linguistic resources, and a family history of speech/language/learning disabilities compared to PD-alone children.

These studies highlight the importance of understanding and addressing speech/language difficulties in children. Children who struggle with a language condition, especially PD + Language, are at a very high risk for language impairment throughout childhood, adolescence, and potentially adulthood. Although this article did not focus on treatment, future research obstacles were outlined. The challenge of testing preschoolers and early school-aged children for language impairments results from a lack of reliable and valid materials that can measure reading abilities and phonological awareness. In addition, children with language impairments spend more time trying to learn the basics of communication while their peer counterparts blaze ahead. The lack of cognitive-linguistic resources to devote to other tasks needs to be addressed when evaluating the efficacy of treatments.

Schuele, M. C. (2004). The impact of developmental speech and language impairments on the acquisition of literacy skills. Mental Retardation and Developmental Disabilities, 10, 176-183. Katelyn Warburton (talk) 20:52, 28 February 2012 (UTC)

---

Longitudinal Infant Speech Perception in Young Cochlear Implant Users

Much research has been done regarding speech perception in infants with normal hearing, especially with regard to phoneme discrimination in the first year of life. It has been shown that infants with normal hearing have surprisingly sophisticated systems of speech perception from the onset of life. This ability plays a critical part in linguistic development. Building on this fundamental concept of the basic development of speech perception, Kristin Uhler and her colleagues set out to determine what the course of development would be for a child facing developmental challenges.

The present study is a case study exploring the development of speech perception in infants with hearing impairments who have received cochlear implants to aid their linguistic development. Specifically, the study aims to explore how speech perception began in children with new cochlear implants, if they are able to make discriminations in speech patterns, and how their development compares to a child with normal hearing. This research is of great importance because if children with cochlear implants can perceive speech in the same way as normal hearing children, they will be able to interact in a speaking world.

This study focused on case studies of seven children with normal hearing and three children with cochlear implants. Each child underwent speech perception testing in which they were asked to discriminate between two contrasting sounds. The number of sounds played as well as their difficulty was manipulated by the experimenters. The researchers ultimately measured the number of head turns the child made in response to the sounds played in the room. At the onset of the experiment, each child was placed in their caretaker’s lap. After hearing several initial, simple sounds, the children lost interest in the source of the sound. The child was then played slightly differing sounds and was conditioned to turn their head when they heard a difference. All testing took place in a double-walled, sound-treated room.

The results of these case studies revealed a great deal about speech perception abilities in children with cochlear implants. The first case study showed that, prior to receiving a cochlear implant, the child did not perceive any sounds in his environment. Once the cochlear implant was activated, however, he was able to develop speech perception with head-turn accuracy only slightly below that of a child with normal hearing. In the second case study, the child with the cochlear implant had even more promising success. After implantation, he was able to discriminate many of the five core phoneme contrasts that each control child with normal hearing could discriminate. This child’s speech perception was nearly normalized with the use of the cochlear implant, except for the distinction between the phonemes /pa/-/ka/. The final case study showed complete normalization of speech perception with the use of a cochlear implant. The study also suggested that in children with cochlear implants and children with normal hearing alike, discrimination of vowels and voice onset time emerges earlier in development than the ability to discriminate place of articulation. These findings supported the researchers’ predictions.

Broader implications for this research include the application to the importance of linguistic development and phoneme discrimination in early infancy. These findings suggest that children with hearing impairments may be able to participate in this crucial development.

Uhler, K., Yoshinaga-Itano, C., Gabbard, S., Rothpletz, A. M., & Jenkins, H. (2011). Longitudinal infant speech perception in young cochlear implant users. Journal of the American Academy of Audiology, 22(3), 129-142. doi:10.3766/jaaa.22.3.2 Kfinsand (talk) 02:13, 29 February 2012 (UTC)

---

The Role of Talker-Specific Information in Word Segmentation by Infants by Derek M. Houston and Peter W. Jusczyk

When infants are introduced to human speech, they typically hear the majority of words in the form of sentences and longer passages rather than as single words. In fact, previous research found that only 7% of the speech heard by infants consists of isolated words. Although research shows that infants can identify single words produced by different speakers, Houston and Jusczyk aimed to find out whether infants could identify these same similarities in the context of fluent speech.

For the initial study, 36 English-speaking 7.5-month-olds were presented with single words and full passages. The passages consisted of six sentences for each of four specific words (cup, dog, feet, bike). The participants were split into two groups: one heard Female Talker 1 first followed by Female Talker 2, and the other group heard the same speakers in the opposite order. Throughout the procedure, the infants’ head turn preference was measured.

The infants turned their heads for longer toward the familiar words presented by the second speaker, suggesting that infants can generalize learned words across different speakers.

The second experiment also tested generalization of words across talkers; however, the second speaker was replaced by a speaker of the opposite sex. In contrast with the initial study, the results showed no difference in head turn preference between the two speakers, indicating difficulty generalizing the words across speakers of different genders. Experiment three aimed to mirror the methods and findings of the initial study by using two male speakers instead of two female speakers. The results showed that infants were able to generalize across male speakers just as they had across female speakers. In the fourth experiment, Houston and Jusczyk addressed the possibility that 10.5-month-olds might be able to generalize words across speakers of different genders. By replicating the second experiment with 10.5-month-old infants, they found that these older infants were able to generalize between speakers of different genders.

This study and the follow-up experiments suggest that infants are able to generalize fluent speech between speakers, but only to a certain extent. While 7.5-month-olds are able to generalize between two women and between two men, they are not able to generalize fluent speech across genders. By the age of 10.5 months, however, infants’ ability to generalize has increased and they are able to generalize between speakers of different genders.

Houston, D. M., & Jusczyk, P. W. (2000). The role of talker-specific information in word segmentation by infants. Journal Of Experimental Psychology: Human Perception And Performance, 26(5), 1570-1582. doi:10.1037/0096-1523.26.5.1570 Smassaro24 (talk) 06:53, 29 February 2012 (UTC)[reply]

---

Positional effects in the Lexical Retuning of Speech Perception by Alexandra Jesse & James McQueen Lino08 (talk) 15:01, 23 February 2012 (UTC)

In the melting pot that is American culture, people speak in many languages, accents, and dialects. It can be a challenge for listeners to understand one another because pronunciation varies across speakers. Previous research has found that listeners use numerous sources of information to interpret the speech signal. People must also use their previous knowledge of how words should sound to help them acclimate to differences in the pronunciations of others. These ideas led the researchers to postulate that the speech-perception system benefits from all learning experiences: once word-specific knowledge is gained, it becomes possible to understand different talkers’ pronunciations from any position within a word. The researchers followed up on this idea and tested whether lexical knowledge from previously learned words still allows listeners to understand words when the relevant sounds appear in a different position within a word.

In Experiment 1 of this study, the researchers created lists of 20 /f/-initial and 20 /s/-initial Dutch target words based on the results of their pretest. They combined these 40 words with 60 filler words and 100 phonetically legal non-words. Ninety-eight Dutch university students with no hearing problems participated and were randomly assigned to one of two groups. The /f/ training group was presented with 20 natural /s/-initial and 20 ambiguous /f/-initial words, and vice versa for the /s/ training group. Both groups heard all 160 filler words. Participants had to respond quickly and accurately whether the word they heard was a Dutch word or not. After the exposure phase, participants went through a test phase in which they listened to /f/ and /s/ fricatives as either onsets or codas of words. They had to categorize, as quickly and accurately as possible, whether the sound they heard was an /f/ or an /s/. The independent variable was whether participants were trained with ambiguous /f/ words or ambiguous /s/ words, and the dependent variable was the reaction time in the test phase. In Experiment 2, word-final sounds were swapped with syllable-initial sounds to test for a possible transfer of learning. The researchers kept the procedure the same in both experiments.

The results from the first experiment failed to show lexical retuning, meaning that the researchers could not determine whether learning transfers across syllables in different positions. The results from the second experiment showed that more [f] responses were given by the /f/ training groups than by the /s/ training groups, which demonstrates lexical retuning and its transfer across different syllable positions. The findings matched the researchers’ expectations only partially. In contrast to their hypothesis, the researchers found no evidence that lexical retuning occurs when ambiguous speech sounds are heard in word-initial position. However, their findings did show that when sounds in different positions are matched acoustically, a person can generalize over the difference in position. The researchers concluded that retuning helps listeners recognize and understand the words of a speaker even if the speaker has an unusual pronunciation.

Jesse, A. & McQueen J. (2011). Positional Effects in the Lexical Retuning of Speech Perception. Psychonomic Society, Inc. doi: 10.3758/s13423-011-0129-2

---

Influences of infant-directed speech on early word recognition by Leher Singh, Sarah Nestor, Chandni Parikh, & Ashley Yull. Misaacso (talk) 01:11, 29 February 2012 (UTC)

This study was conducted to gain knowledge about the influence of infant-directed speech on long-term storage of words in a native language. The researchers wanted to know whether the style of the stimulus input influences the capacity for long-term storage and the ability to retrieve the information. It was important to discover whether infant-directed speech could influence these aspects of word recognition before vocabulary production is evident in an infant.

When adults interact with infants, the speech used tends to be slower, have less sophisticated grammar, carry less content, and be produced at a higher pitch. This child-directed speech, commonly termed infant-directed speech, has also been documented in languages other than English. Previous research focused on phoneme perception, syntactic parsing, word segmentation, and boundary detection. Other research found evidence that infants can generalize to a novel talker in voice priming when both the original and the novel talker produce the test stimuli. Since past research did not cover infants’ ability to encode and retrieve words from their native language presented in infant-directed versus adult-directed speech, a study of these abilities was warranted.

English-exposed infants of 7.5 months of age were exposed either to an adult using infant-directed speech in the presence of the infant, or to an adult using adult-directed speech toward another adult while the infant was absent. The researchers measured listening time to passages containing the familiarized word in the infant-directed speech condition, listening time to passages containing the familiarized word in the adult-directed speech condition, and listening time to passages in which no familiarized word was present.

For each condition, the infant heard the words bike, hat, tree, or pear in various sentences. As the infants in the study sat on their caregiver’s lap, a flashing light in front of the infant drew fixation; the center light was then turned off and a light on one side of the infant flashed while the speech stimulus was presented. Familiarization occurred with both infant- and adult-directed speech. The infants were tested 24 hours later to determine whether they could recognize the words from the previous day.

The study concluded that infant-directed speech is a key factor in recognizing words early in life, proposing that infants not only prefer this type of speech but also benefit from it because it aids them in retrieving and processing words. Infant-directed speech also helps an infant generalize memory representations, store words in long-term memory, and extend the representation of words in the infant’s mind.

The conclusions of this research can prompt multiple directions of further inquiry. One such topic is which attention-getting aspect of infant-directed speech leads to the findings observed in this experiment. Another question stemming from this research is how words become associated with meaning for an infant; this has been researched in adults, but not much is known about how it works in infants.

Singh, L., Nestor, S., Parikh, C., & Yull, A. (2009). Influences of infant-directed speech on early word recognition. Infancy, 14(6), 654-666. doi:10.1080/15250000903263973 Misaacso (talk) 07:15, 1 March 2012 (UTC)

---

Early phonological awareness and reading skills in children with Down syndrome by Esther Kennedy and Mark Flynn

It is commonly known that individuals with Down syndrome are fully capable of acquiring reading skills. However, much less is known about the processes that lead to the development of their literacy skills. Kennedy and Flynn broadly examined the literacy skills of the children with Down syndrome who participated in this study. They did so by picking apart the different levels of attaining literacy skills, specifically phonological awareness. The difficulty in studying this population is that the tests used with typically developing children must be adapted so that deficits in cognitive skills do not interfere with any of the areas assessed. The researchers adapted tasks to assess phonological awareness, literacy, speech production, expressive language, hearing acuity, speech perception, and auditory visual memory.

This study took place in New Zealand and included nine children with Down syndrome. They were between the ages of five and ten, and all had at least six months' exposure to formal literacy instruction in a mainstream school. Literacy teaching in New Zealand uses a “whole language” approach, which focuses on deriving meaning from the text. This means the children in this study had little to no history of phonologically based literacy instruction.

Because hearing impairment is prevalent in individuals with Down syndrome, hindering speech perception and auditory processing skills, an audiologist made sure the children could hear clearly throughout the study. To test short-term memory, the subjects were asked to recall unrelated pictures they had studied whose names were one, two, and three syllables long. To test speech production, the Assessment of Phonological Processing Revised (Hodson, 1986) was used to obtain a Percentage Consonants Correct score from a list of 106 single words. To test expressive language, an MLU (mean length of utterance) was computed from 50-100 intelligible utterances. Two different methods were used to test reading. The first was the Burt Word Reading Test-New Zealand Revision (Gilmore, Croft & Reid, 1981); however, if the child was unintelligible, the researchers requested a list of words the child could consistently read accurately. They also tested letter-sound knowledge: the children were asked to identify the letter corresponding to the sound the investigator produced. The investigators divided the letters in an attempt to avoid misperceptions between letters that sound similar. They also avoided adding a vowel after voiced phonemes and lengthened them when possible, for example using “vvv” rather than “vuh.”

The results were as Kennedy and Flynn predicted. Participants performed better on the tasks the longer they had been in school, and the tasks that required a spoken response were more difficult to score due to speech impairments. Participants with higher phoneme awareness skills had higher reading levels. However, only one participant was able to detect rhyming. This study looked solely at reading skills based on text decoding, not whether the participants were able to extract meaning from what they read. The study did not include a control group and had only nine participants, which limits the conclusions that can be drawn.

Kennedy, E. J., & Flynn, M. C. (2003). Early phonological awareness and reading skills in children with Down syndrome. Down Syndrome Research and Practice, 8(3), 100-109. Lcannaday (talk) 00:48, 1 March 2012 (UTC)

---

Modified Spectral Tilt Affects Older, but Not Younger, Infants’ Native-Language Fricative Discrimination by Elizabeth Beach & Christine Kitamura

At birth, infants rely on basic auditory abilities to distinguish native and nonnative speech, and up until 6 months of age they prefer low-frequency infant-directed speech to adult-directed speech. They then begin to learn their native vowels and, at 9 months, consonants. As infants’ ability to distinguish nonnative consonants decreases while their ability to distinguish native consonants improves, they are said to move from a language-general to a language-specific mode of speech perception. This led researchers Beach and Kitamura to investigate how adjusting the frequency of native speech affects infants’ speech perception as they develop.

In this study, the ability of 6- and 9-month-old infants to discriminate between the fricative consonants /f/-/s/ at unmodified, high, and low frequencies was tested. Ninety-six infants were assigned evenly to one of three conditions: unmodified normal speech, normal speech shifted to a lower frequency, and normal speech shifted to a higher frequency. The speech stimuli were four samples of /f/ and four of /s/. Overall duration and fundamental frequency (F0) remained constant, while center of gravity and the frequency of the second formant (F2) at the vowel transition varied. Each infant was tested individually using a visual habituation procedure in which an auditory stimulus was presented whenever the infant fixated on the display. Two no-change controls of the habituation stimulus were presented to ensure there was no spontaneous recovery. Control trials were then followed by two test trials, which alternated the test stimulus with the habituation stimulus.

Results showed that in the normal speech condition, regardless of age, infants increased their fixation durations in test trials compared with control trials; both age groups showed evidence of discriminating /f/ versus /s/. In the low-frequency condition, 6-month-old infants had longer fixation periods than 9-month-old infants, and both age groups discriminated /f/-/s/. In the high-frequency condition, 6-month-old infants showed a larger increase in fixation times, and younger but not older infants were sensitive to the fricative distinction. In sum, 6-month-old infants can discriminate /f/-/s/ regardless of speech modification but perform best in the unmodified and high-frequency conditions, whereas 9-month-olds could only discriminate /f/-/s/ in the normal or low-frequency conditions, with their best performance in the normal condition.

Based on the acoustic mode of perception first used by infants, the researchers predicted that amplifying higher frequencies would lead to increased discrimination for both age groups; the results show evidence of this in 6-month-olds but not 9-month-olds. On a linguistic basis, they predicted that 9-month-olds would only be able to discriminate /f/-/s/ in the normal speech condition. The 9-month-olds' poorer discrimination in the high- and low-frequency conditions supports this.

This study will serve as a base for future research on speech perception in infants with hearing loss and brings us closer to providing infants with hearing loss the amplification strategies that best support the development of language skills.

Beach, E., & Kitamura, C. (2011). Modified spectral tilt affects older, but not younger, infants' native-language fricative discrimination. Journal Of Speech, Language, And Hearing Research, 54(2), 658-667. doi:10.1044/1092-4388(2010/08-0177)Mvanfoss (talk) 01:21, 1 March 2012 (UTC)[reply]

---

Maternal Speech to Infants in a Tonal Language: Support for Universal Prosodic Features in Motherese

Motherese, baby talk, and infant-directed speech are common terms for the distinctive voice adults use when speaking to infants. Previous research identified that infant-directed speech has unique acoustic qualities, or prosodic features. For example, a higher pitch and slower tempo are prosodic features consistently associated with motherese. Furthermore, this type of speech provides benefits for infants' language development. Since these results are so pervasive across English-speaking mothers, DiAnne Grieser and Patricia Kuhl attempted to test whether this prosodic pattern occurs in other languages. Specifically, they wanted to test a tonal language, in which a change in pitch alters the meaning of a word. This test would help determine whether the pattern is universal.

In this experiment, there were eight monolingual women who spoke Mandarin Chinese and were mothers of an infant between six and ten weeks of age. Each woman was recorded as she spoke on the telephone to a Chinese-speaking friend and as she spoke to her infant held in her lap. The average fundamental frequency (F0), average pitch range for each sample recording, average pitch range for each phrase, average phrase duration, and average pause duration were recorded for the adult-to-adult (A-A) conversation and the adult-to-infant (A-I) conversation.

Overall, findings illustrated that fundamental frequency and pitch range, whether measured over the whole sample or over individual phrases, significantly increase or shift upward when Mandarin mothers speak to their infants. In other words, their pitch increases. Furthermore, the pause duration and phrase duration are altered when the mothers speak to their infants: they speak more slowly, shorten their phrases, and lengthen their pauses in comparison to speech directed at adults.

These results indicate that Mandarin motherese is very similar to English motherese. Therefore, the prosodic patterns (increased average pitch, lengthened pauses, and shortened phrases) in maternal speech to infants are not language-specific. This is a surprising result considering that the tonal language of Mandarin Chinese relies on changes in pitch to indicate word meaning. The question then arises whether or not a developmental change in Mandarin motherese must occur when infants approach the age of language acquisition in order for them to accurately understand the differences between words.

Since these findings are fairly robust, it is important to further understand the benefit this type of speech has for infants. More specifically, research should focus on the acoustic characteristics of motherese that capture the attention of infants. Research has identified that this universal register exists, but focus should now turn to the purpose it serves.

Grieser, DiAnna L., & Kuhl, Patricia K. (1988). Maternal Speech to Infants in a Tonal Language: Support for Universal Prosodic Features in Motherese. Developmental Psychology, 14-20 TaylorDrenttel (talk) 01:28, 1 March 2012 (UTC)[reply]

---

Stuffed toys and speech perception

There is enormous variation in phoneme pronunciation among speakers of the same language, and yet most speech perception models treat these variations as irrelevancies that are filtered out. In fact, these variations are correlated with the social characteristics of the speaker and listener — you change the way you speak depending on who you're talking to. Now, recent research shows that these variations go beyond just speakers: listeners actually perceive sounds differently depending on who they come from. Jennifer Hay and Katie Drager explored how robust this phenomenon is by testing if merely exposing New Zealanders to something Australian could modify their perceptions.

Subjects heard the same sentences, with a random change in accent. The /I/ sound was modified to sound more like an Australian accent or like a New Zealand accent, and all subjects heard all variations. The only difference between the two groups was the type of stuffed animal present -- either a koala, for the Australian condition, or a kiwi, for the New Zealand condition. After hearing each sentence, participants wrote on an answer sheet if it sounded like an Australian speaker or a New Zealand speaker had read it.

When the participants listened to the sentences with a koala nearby, they tended to perceive them as being like an Australian accent, especially in the transitory sentences where the /I/ phoneme was indistinguishable from Australian or New Zealand accents. Similarly, when the kiwi was present, participants were more likely to perceive the sentences as sounding more like a New Zealand accent.

The researchers had originally been skeptical that these results could be obtained. Hay had previously performed a similar experiment, and the results from this study corroborated those from before. This suggested to the researchers that invoking ideas about a particular region or social aspect can alter the way a sentence is perceived.

Hay, J., & Drager, K. (2010). Stuffed toys and speech perception. Linguistics, 48(4), 865-892. doi:10.1515/LING.2010.027 AndFred (talk) 03:12, 1 March 2012 (UTC)[reply]

---

Infants listen for more phonetic detail in speech perception than in word-learning tasks by Christine L. Stager & Janet F. Werker Hhoff12 (talk) 04:00, 1 March 2012 (UTC)

---

Phoneme Boundary Effect in Macaque Monkeys

There has been debate over what specific characteristics of language are unique to humans. A popular approach to investigating this topic is to conduct studies to test possible innate language processes in animals and then compare these results to human subjects. There has been previous research on the nature and origins of the phoneme boundary effect. Many of these studies are centered on speech and non-speech comparisons along with looking at the difference between human and animal subjects.

Prior to this particular study on macaque monkeys, there were five studies that compared perception of speech sounds between animal and human subjects. These studies concluded that certain nonhuman species are able to perceptually partition speech continua in the regions already defined by human listeners. In addition, animal subjects are able to discriminate stimulus pairs from speech-sound continua. To add to the data that had already been gathered, Kuhl and Padden aimed to extend the research to voiced and voiceless continua in order to further investigate phonetic boundaries.

In the study, Kuhl and Padden used three macaque monkeys as their subjects. The subjects were tested on their ability to distinguish between pairs of stimuli with voiced and voiceless properties (ba-pa, da-ta, ga-ka). The subjects were restrained in chairs during the testing, and the audio signals were delivered through an earphone in the right ear. A response key was located in front of the subject, along with a green and a red light that were used to train the subject to respond at the correct time and in the correct way. In addition, an automatic feeder that dispensed applesauce was used as positive reinforcement throughout the study.

During the procedure there were two types of trials: the subjects were presented with stimuli that were the same or with stimuli that were different. These trial types were run with equal probability. The subject was required to determine whether the stimuli were the same or different, by pressing the response key for the full duration of the trial if the two stimuli were the same and releasing the response key if they were different.

Kuhl and Padden found that the subjects were able to discriminate between sounds from different phonetic categories significantly better than between sounds from the same phonetic category. These results were consistent with those found in human subjects, both adults and infants. Given the similarity of these results, it can be suggested that the phoneme-boundary effect is not exclusive to humans. These results raised different issues involving innate language processes, including the relevance of animal data to human data and the role played by auditory constraints in the evolution of language. Further studies will be necessary in order to determine how far these results can be applied to the overall evolution of language.

Kuhl, P. K., & Padden, D. M. (1982). Enhanced discriminability at the phonetic boundaries for the voicing feature in macaques. Perception and Psychophysics. doi:10.3758/BF03204208 Anelso (talk)

---

This article was about how speech perception remains intact even when the canonical acoustic correlates of phonemes are removed from the spectrum. A portion of this perceptual flexibility can be attributed to modulation sensitivity in the auditory-to-phonetic projection. Three tests were conducted to estimate the effects of exposure to natural and sine-wave samples of speech on this kind of perceptual versatility. Sine-wave speech is created by synthesizing the voice differently and by deleting particular phonetic details.

The first experiment served as a benchmark of the intelligibility of easy and hard sine-wave words. This initial procedure aimed to determine a baseline difference in recognition performance between easy and hard words using test items created by modeling sine-wave synthesis on natural samples spoken by a single talker. The experimenters expected sine-wave speech to be treated much like the talker's natural speech. They used two sets of seventy-two words (easy and hard), which differed in characteristics such as mean frequency of occurrence and mean neighborhood density; the words were spoken by a male talker wearing a headset. The participants were twelve English-speaking volunteers recruited from the undergraduate population of Barnard College and Columbia University. Participants listened to the words and wrote them down in a booklet (guessing was encouraged). The results showed better recognition of easy words (42%) than hard words (25%).

The second experiment tested the effect of exposure to sine-wave speech. Here the researchers compared exposure to the acoustic form of the contrasts with exposure to the idiolectal characteristics of a specific talker. Three kinds of exposure were provided, each to a different group of listeners, preliminary to the word recognition test: (a) sine-wave sentences based on speech of the same talker whose samples were used as models for the easy and hard words; (b) natural sentences of the talker whose utterances were used as models for the sine-wave words, to provide familiarity with the idiolect of the sine-wave words without also creating familiarity with sine-wave timbre; and (c) sine-wave sentences based on natural samples of a different talker, to familiarize listeners with the timbre of sine-wave speech without also producing experience of the idiolect of the talker who produced the models for the sine-wave words. Two kinds of test materials were used: sentences presented in an exposure interval, and easy and hard sine-wave words used in a spoken word identification test. The three exposure sets were as follows: Same Talker Natural was a set of seventeen natural utterances produced by one of the authors, the same talker whose speech served as the model for the easy and hard sine-wave words; Same Talker SW was a set of 17 sine-wave sentences; and Different Talker SW was a set of 17 sine-wave sentences modeled on natural utterances spoken by one of the researchers. The participants were thirty-six volunteers from the undergraduate population of Barnard College and Columbia University, randomly assigned to three groups of 12 listeners. The subjects listened to each sentence 5 times (1 second between repetitions and 3 seconds between trials). Following this portion of the test came an identification test of easy and hard words, with responses written in a booklet.

The sentence transcriptions were scored and performance was uniformly good, with natural sentences transcribed nearly error-free and sine-wave sentences identified at a high level despite the difference between the talkers (Same Talker SW = 93% correct, Different Talker SW = 78% correct). Each of the 34 sine-wave sentences was identified correctly by several listeners. To summarize the results, easy words were recognized better than hard words in every condition; exposure to natural sentences of the talker whose utterances were used as models for the sine-wave words did not differ from no exposure, nor did it differ from exposure to sine-wave speech of a different talker. Recognition improved for easy and hard words alike after exposure to sine-wave speech produced by the talker who spoke the natural models for the sine-wave words.

The third experiment was a control test of uncertainty as the cause of performance differences between easy and hard sine-wave words. It was designed to estimate residual effects on recognition attributable to the inherent properties of the synthetic test items themselves by imposing conditions that eliminated the contribution of signal-independent uncertainty. By using a procedure that removed signal-independent effects on identification due to uncertainty, this test exposed any residual signal-dependent differences in word recognition caused by errors in estimating spectrotemporal properties when creating the sine-wave synthesis parameters. The same easy and hard sine-wave words from the first experiment were used, except that this time they were arranged so that some started with the same letters and some ended with the same letters. Twenty-four volunteers were recruited from the undergraduate population of Barnard College and Columbia University and randomly assigned to two groups of 12. These participants were given 140 trials of words and wrote the words down in a booklet (guessing was encouraged). The results showed that the two groups (same beginning/ending versus no similarities) scored about the same. Although there was no difference between groups, performance was very good on both easy and hard words, with approximately 88 percent of the words identified correctly. The discussion concluded that a listener who has accommodated this extreme perturbation of speech perception exemplifies perceptual versatility, and the three tests reported here aimed to calibrate the components of this perceptual feat by assessing signal-dependent and signal-independent functions.

Remez, R. E., Dubowski, K. R., Broder, R. S., Davids, M. L., Grossman, Y. S., Moskalenko, M., Pardo, J. S., & Hasbun, S. M. (2011). Journal of Experimental Psychology: Human Perception and Performance, 37(3), 968-977. Gmilbrat (talk)

---

Miller, J. L., Mondini, M., Grosjean, F., & Dommergues, J.-Y. Dialect effects in speech perception: The role of vowel duration in Parisian French and Swiss French. Language and Speech, 54(4), 467-485. Sek12 (talk)

The experiments in this article ask how native Parisian French and native Swiss French listeners use vowel duration in perceiving the contrast between the short /o/ and the long /o/ in the French words cotte and cote. The researchers wanted to see whether listeners could perceive the difference between the words in both their native dialect and the other dialect of French.

This research question is important because it is trying to answer whether or not the vowel duration and vowel contrast of the two vowels used in the experiments are noticeably perceivable between Parisian and Swiss French dialects, which are almost identical.

Previous research on this topic, also done by the same authors, used only vowel duration as the indicator of vowel identification. This previous research found that vowel duration played a much more important role in Swiss French than in Parisian French. Parisian French listeners identified the vowels using only spectral information, while the Swiss French listeners used both spectral information and vowel duration to identify the vowels presented to them. The current study investigates more deeply the dialect difference between the vowels in the study (a short /o/ and a long /o/) and the way Parisian and Swiss French listeners perceive the difference between those vowels in words.

In Experiment 1 of this study the researchers created four speech series to find the best exemplars of vowel duration in both native languages. Two of the series were based on the language of Parisian and two were based on that of the Swiss French. Each of the four series used the word cotte with a short vowel and one with a long vowel and also included cote with the same differentiation. The variable measured in both studies was the vowel duration difference between the two native languages.

In Experiments 1 and 2 the procedure was the same. Sixteen native Parisian French and sixteen Swiss French participants were chosen. Four series of stimuli were created for use in the study. Each series consisted of short- and long-duration vowels in the words cotte and cote and was based on the natural speech of both groups. All the participants took part in two separate sessions, each consisting of three parts: familiarization, practice, and test. In the familiarization phase, listeners were presented with stimuli and rated them on a scale of 1-7 (1 being a poor exemplar and 7 being the best-fit exemplar). No results were taken from the familiarization phase. In the practice phase, participants were presented with the same stimuli as in the test phase, in random order. In the last part of the experiment, the test phase, participants were presented with 14 blocks of stimuli and gave a rating for each based on vowel duration.

The results indicate that Swiss French listeners judged the longer vowels to be the best exemplars of the short /o/ and long /o/ whether they listened to the Swiss French series or the Parisian French series. For both Parisian French and Swiss French listeners, the best exemplar was judged to be the long-vowel variant of the words used in the study. Both groups showed sensitivity to vowel duration for both dialects in both the short /o/ and the long /o/. The researchers expected that only a small range of vowel durations would be perceived by listeners as good exemplars, and they were correct in this expectation.

The conclusion of this study tells us that "taken together, the analyses indicate that, overall, short /o/ and long /o/ vowels are differentiated by duration in both dialects, but that the difference between the two vowels is greater in Swiss French than in Parisian French, owing to a longer /o/."

==Word Processing==


Rayner, K., Slattery, T. J., Drieghe, D., & Liversedge, S. P. (2011). Eye movements and word skipping during reading: Effects of word length and predictability. Journal Of Experimental Psychology: Human Perception And Performance, 37(2), 514-528. doi:10.1037/a0020990

Lkientzle (talk) 04:25, 8 March 2012 (UTC)


The goal of the present study was to determine whether word length and word predictability, based on previous context cues, have a significant effect on how long a person fixates on a target word. Eye-tracking devices were used to explore these questions; such devices have been used in previous word-processing studies to analyze the patterns readers use to gather meaning from a string of words. Eye fixation time is the amount of time a person keeps their eyes fixated in one specific place, usually indicating that more processing is needed for that word. The research also looked at how these variables (both predictability and word length) affected a participant’s likelihood of skipping over the target word. Although past research has looked at these two factors separately (predictability of a word on fixation time, and length of a word on fixation time), the present research combined these two elements to see whether the predictability of the target word interacts with varying word lengths to affect fixation times.


Previous research on this relationship was attempted once but encountered ceiling effects due to the word lengths chosen; namely, predictable two-letter words were skipped 79% of the time (Drieghe, Brysbaert, Desmet, and De Baecke, 2004). To avoid this ceiling effect in the present study, three categories of word length were chosen: short, medium, and long. The independent variables (IVs) for the present study were word length (short, medium, and long) and word predictability (high vs. low). The dependent variables were how long a person fixated on a word and the probability of skipping over the target word as a result of the IVs.


Participants were asked to silently read a sentence presented on the screen in front of them while connected to an eye-tracking device. The sentences were randomized, and after every third sentence a comprehension question was asked to ensure meaning was understood. The sentences varied their target words across the conditions stated above (e.g., high predictability with a short word, low predictability with a long word, etc.).


Main results from this study indicated that word predictability significantly affected how often a word was skipped and the length of fixation time on the target word. Word length had some effect on fixation time and the number of target words skipped, but not enough to be significant. Unpredictable words had longer fixation times in all word length conditions. Predictable words had the most skips across all conditions.


Although the researchers had hoped to find a significant relationship between word length and fixation times/skipping probability, this study still informs future work because it was the first to demonstrate skipping rates for long words. This is interesting because the long words (10 letters or longer) extended beyond the limits of the human identification span, meaning that participants most likely skipped them based on the partial information available. This finding could aid research on the process of skipping longer words and how we can still generate meaning without the word.


Lkientzle (talk) 06:06, 8 March 2012 (UTC)

---

Emotion Words Affect Eye Fixations During Reading (Graham G. Scott, Patrick J. O'Donnell, and Sara C. Sereno) Katelyn Warburton (talk) 21:49, 28 February 2012 (UTC)

Previous research has evaluated the influence of “emotion words” on arousal, internal activation, and valence (value/worth). There is little disagreement that a reader’s response to emotion words can influence cognition, but physiological, biological, environmental, and mental influences remain understudied. This study evaluates the effect emotionality can have on lexical processes by tracking eye movements during fluent reading.

Forty-eight native English-speaking participants with uncorrected vision were asked to read from a computer screen (ViewSonic 17GS CRT) while their right eye movements were monitored (by a Fourward Technologies Dual Purkinje Eyetracker). Arousal and valence values as well as frequencies for the words were obtained, and the values were averaged across categories. Twenty-four sets of word triples including positive, negative, and neutral emotion words were presented to participants, with the target emotion words in the middle of the sentence. Participants were told that they would be asked yes/no questions after they read each sentence to ensure they were paying attention. After they read a sentence and answered the question, they were instructed to look at a small box on the screen while the tracker recalibrated. This occurred through all 24 trial sets.

In order to verify the plausibility of test materials, three additional tests were conducted utilizing different participants than the initial study. The first sub-study involved 18 participants who rated the plausibility of each emotion word appearing in a sentence. The second involved a similar task—participants were asked to rate the plausibility of an emotion word, but made the judgment from a sentence fragment not an entire sentence. Finally, 14 different participants were given a statement and asked to generate the corresponding emotion word. These three norming studies verified that the emotion words being used in the central study were plausible without being predictable.

This is the first study to analyze single emotion words in the context of fluent reading. Researchers found that participants had shorter fixation times on positive and negative emotion words than on neutral words. In addition, the influence of word frequency on fixation was moderated by arousal levels; more specifically, low-frequency words were facilitated by high levels of emotional arousal, either positive or negative. Therefore, emotional biases and word frequencies influence eye fixation while reading. The results of this study were consistent with previous research on emotion word processing and extended past studies by evaluating emotional word processing during fluent reading. In short, this study provides evidence of the important role of emotion in language processing; more specifically, the emotional nature of a word, defined by its arousal and valence characteristics, affects lexical access and therefore influences information processing. By following eye movements, researchers were able to identify the rate at which words are recognized. This demonstrates that word meanings are activated and integrated quickly into the reading context.

Scott, G. G., O'Donnell, P. J., & Sereno, S. C. (2012). Emotion words affect eye fixations during reading. Journal of Experimental Psychology: Learning, Memory, and Cognition, doi: 10.1037/a0027209

---

The Structural Organization of the Mental Lexicon and Its Contribution to Age-Related Declines in Spoken-Word Recognition Amf14 (talk) 04:22, 29 February 2012 (UTC)

Older age groups generally have a more difficult time with speech processing than younger groups. This was originally thought to be due to the hearing loss that people experience as they age. Recently, studies have attempted to show that factors other than hearing loss can contribute to difficulties with processing. These experiments have shown how a reduction in one’s cognitive abilities can greatly impair the processing of spoken language.

The English vocabulary is immense, and therefore when accessing a specific word, the brain must activate its meaning in order to interpret and understand what is being said. A model known as the Neighborhood Activation Model suggests that words that are phonetically similar to a heard word are all activated within the brain. Words are recognized based on two important characteristics: density and frequency (Luce, 1986). Density refers to how many similar neighboring words there are, and frequency indicates how often the word is used in comparison to the other words within its neighborhood. As the language input continues, words are deactivated and ruled out until one meaning can be settled on.

According to experiments done by Luce, “hard” words, those from high-frequency, high-density neighborhoods, will be more difficult to recognize. The reasoning behind this idea is that when words come from very dense neighborhoods, all of the similar words are activated as possibilities as well. This leads to slower processing of the word that was actually spoken because the listener is unsure which word to continue activating and recall. Also, processing large amounts of spoken language is costly to a person’s cognitive resources, so age-related declines in cognitive abilities should affect one’s ability to process hard words.

Experiment 1 compared young adults with an elderly population. Each group heard a series of 75 easy and 75 hard words spoken by a male or female speaker; the difficulty of the words was based on the density and frequency of their neighborhoods. The listeners were asked to write down the word that they heard and were given credit only for an exact match. Results revealed that older adults' performance was influenced much more by word difficulty, while scores for easy words were comparable between the two groups. Hearing loss was measured but was not found to affect the task. An additional experiment was conducted to rule out extra factors in Experiment 1. Questions were raised about how demanding the task was for older adults, so participants from the same population were given the same task in the presence of white noise. Within the older group, the difference in performance between easy and hard words should remain if the poor performance was due to factors other than task difficulty. As predicted, older participants in this experiment consistently performed worse than the young group and continued to perform better on easy words than hard words. These findings suggest that the discrepancies in accuracy cannot be attributed entirely to task difficulty; it is possible that older listeners struggle to isolate the specific sounds of words that have many similar counterparts in the mental lexicon.

A final component of speech processing remained to be analyzed. Experiment 3 was designed to examine the processing resources available in old age. When listening to many different speakers, the variability of the speech signal increases, so more cognitive resources are needed to decode the different acoustic patterns. This study better simulates real-world contexts, where many different voices surround a listener. The manipulated factors now included easy versus hard words and a single voice versus multiple voices. The findings from Experiment 1 were replicated, and word recognition was also significantly reduced when older adults heard many different speakers. This final experiment showed that all age groups have to use more cognitive resources when processing words from multiple speakers, but older listeners are at a greater disadvantage.

As age increases, adults experience reductions in cognitive abilities and processing, which leads to lags in the time it takes them to understand spoken language. Other abilities that decline with age, such as eyesight and hearing, add to these demands as well.

Amf14 (talk) 19:52, 6 March 2012 (UTC)

Sommers, M. (1996). The structural organization of the mental lexicon and its contribution to age-related declines in spoken-word recognition. Psychology and Aging, 11(2), 333-341. doi:10.1037/0882-7974.11.2.333

---

Evidence for Sequential Processing in Visual Word Recognition (Peter J. Kwantes and Douglas J. K. Mewhort)

When reading a word, there can be many possible candidates for what the word will turn out to be, but at a certain point, the uniqueness point (UP), only one possibility remains. Previous research by Radeau et al. tested whether words are encoded sequentially using the UP, defined by the position of the letter at which a word becomes unique. That study asked whether the UP behaves as it does in speech recognition, where words with an early UP are named faster than words with a late UP; however, the test showed the opposite result. In an effort to explain these mixed results, Kwantes and Mewhort redefined the measure as an orthographic uniqueness point (OUP), the letter position at which a word becomes unique when read from left to right. Their study aimed to determine whether words with an early OUP could be identified faster than those with a late OUP.
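As an illustration of the OUP idea, the following sketch computes, for each word in a small invented lexicon, the first letter position at which the word's left-to-right prefix matches no other entry; the lexicon and words are hypothetical and not the authors' stimuli:

<syntaxhighlight lang="python">
# Hypothetical mini-lexicon used only to illustrate the OUP idea.
lexicon = ["blossom", "blister", "blaming", "customs", "custody", "cushion"]

def orthographic_uniqueness_point(word, lexicon):
    """Return the 1-based letter position at which `word` becomes the only
    lexicon entry consistent with the letters read so far (left to right)."""
    for pos in range(1, len(word) + 1):
        prefix = word[:pos]
        candidates = [w for w in lexicon if w.startswith(prefix)]
        if candidates == [word]:
            return pos
    return None  # never becomes unique (e.g., the word is a prefix of another)

for w in lexicon:
    print(w, orthographic_uniqueness_point(w, lexicon))
# "blossom" becomes unique at position 3 in this toy lexicon (an early OUP),
# while "customs" and "custody" only diverge at position 6 (a late OUP).
</syntaxhighlight>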

The initial study involved twenty-five undergraduate students who were asked to name a series of seven-letter words aloud, as quickly as possible, while they were presented visually in a sequence on a screen. Half of the words had an OUP with Position 4 (early-OUP) and the other half had an OUP with Position 6 or 7 (late-OUP). The response time (RT) was measured from the onset of the word until the voice response began.

The reaction time results showed a clear advantage for early-OUP words, which were on average 29 ms faster than late-OUP words.

The second study aimed to determine whether the results of Experiment 1 truly reflected a process of production and pronunciation, or whether they depended on reading processes instead. Experiment 2 used a similar procedure to the previous study, but asked participants to read the word silently and then name it aloud when cued to do so. An early-OUP advantage was not detected when naming was delayed by a cue, suggesting no interaction with output or production processes. The third study also repeated Experiment 1, but removed the visual word stimulus after 200 ms in order to rule out effects of eye movement. Experiment 3 showed results similar to Experiment 1, with an early-OUP advantage, suggesting that the advantage is not a result of eye movements.

The three experiments suggest that the early-OUP advantage in word processing is a result of retrieval from the lexicon, without interference of reading processes or eye movement. The results affirm the predictions of the researchers and present possible reasons for Radeau et al.’s failure to find early-UP advantages. Orthographic uniqueness points display the important role of lexical processes within word recognition.

Kwantes, P. J., & Mewhort, D. K. (1999). Evidence for sequential processing in visual word recognition. Journal of Experimental Psychology: Human Perception and Performance, 25(2), 376-381. doi:10.1037/0096-1523.25.2.376 Smassaro24 (talk) 16:22, 3 March 2012 (UTC)

---

Syllabic Effects in Italian Lexical Access

Past research has explored the role of syllables in speech perception in depth. However, large gaps remain concerning the function of small syllable-sized units in identifying a word during early processing. Research has in part focused on the syllabic hypothesis, which states that syllables are basic units in the processing of speech. It is known that in languages such as English one syllable is often enough to trigger lexical access, so a word can be recognized before it has been completely heard. It has not yet been determined whether these effects are similar in other languages, such as Romance languages.

In the present paper, Tagliapietra and her colleagues aim to determine whether these effects hold in Italian. The researchers set out to determine whether, in Italian, the first sounds of a syllable make contact with the mental lexicon. Specifically, Tagliapietra et al. explore whether this access to the mental lexicon depends on which part of the syllable is stressed. This research is important because it allows speech perception to be understood across languages. The present article features two experiments aiming to clarify this issue. In the first experiment, forty-two Italian-speaking undergraduate students participated. They were asked to respond to target words preceded by syllable primes, to see whether a word could be identified from only its first syllable. Participants were first randomly assigned to two conditions of differing syllable structures. Next, they were assigned either to a condition in which the word following the priming syllable was related to the syllable or to one in which the word was not related. These two factors, syllable structure and the relatedness of the target word, were the independent variables. Participants were then tested on how quickly they could recognize a word from the syllable alone; the dependent variable was the time it took them to respond. The second experiment followed the same procedure, but there was only one independent variable, the relationship between the priming syllables and the target words. This was done to determine whether a fragment shorter than a syllable can establish the same contact with the lexicon.

Results from Experiment 1 indicated that people can make contact with the lexicon, deciding what a word is, after hearing only a syllable, regardless of whether the syllable is stressed. People also responded more quickly when the following word was related to the syllable than when it was unrelated. Experiment 2 found that fragments smaller than a syllable can also make contact with the mental lexicon. Ultimately, the findings do not support the view that Romance languages such as Italian fit the syllabic theory. Although the results do not fully support that theory, they suggest an altered version of it, in which the set of lexical candidates is not reduced until later in the recognition process in Romance languages.

The implications of this article allow readers to deepen their understanding of speech perception across languages. It also aids in understanding the complexities of human speech perception and how these can vary based on small differences between languages.

Tagliapietra, L., Fanari, R. R., Collina, S. S., & Tabossi, P. P. (2009). Syllabic effects in Italian lexical access. Journal of Psycholinguistic Research, 38(6), 511-526. doi:10.1007/s10936-009-9116-4 Kfinsand (talk) 06:22, 6 March 2012 (UTC)

---

Every day, people say sentences that activate parts of the brain. In these sentences we use contextual information that helps reactivate specific words in order to derive the meaning of the sentence. In this article, the experimenters were interested in what kind of information is activated during word processing; specifically, they were concerned with the vertical location associated with a word's referent (e.g., roof vs. root).

In the first experiment, participants performed a lexical decision task with words whose referents have an up or down location (e.g., eagle vs. worm). If reading the word activates whether its referent is up or down, then compatibility effects should occur. Thirty-six right-handed native German speakers took part (two participants were excluded because of very low accuracy). Seventy-eight German nouns with an up or down connotation were used (thirty-nine up and thirty-nine down), with word frequencies taken from a University of Leipzig database. During the procedure the participants were shown nouns and pseudowords and responded with an upward movement in one half of the experiment and a downward movement in the other half. Responses in the upper location were faster than in the lower location, but compatibility effects were significant for both locations, which suggests that location information is reactivated from contextual information.

In the second experiment, the experimenters used the same lexical decision task with the same words, except that responses were based on the font color. Twenty-four native German speakers took part (one participant was excluded because of very low accuracy). The same words appeared in four different colors (blue, orange, lilac, and brown). There was a significant interaction between response location and referent direction, and responses were significantly faster for compatible words than for incompatible words. These results suggest that attending to location is not a prerequisite for the effect during word processing.

In the third experiment, the experimenters added filler words that have no typical location, to counter any participants who might have guessed the manipulation. Twenty-four native German-speaking participants were used, and thirty-nine filler words were added to the list from Experiment 1. Responses were again faster when the word's location and the response direction matched.

In the last experiment, the experimenters examined how quickly participants could press a button in the corresponding location; participants kept their hands stationary on the up and down buttons. There were twenty-four native German-speaking participants (one was excluded due to low accuracy), and the design was otherwise identical to Experiment 2. Like the other experiments, this one found a compatibility effect, and participants were faster at hitting the up button than the down button. Together, these four experiments strongly suggest that information about the location of a word's referent is automatically activated when a participant processes object nouns.

Lachmair, M., Dudschig, C., De Filippis, M., de la Vega, I., & Kaup, B. (2011). Root versus roof: Automatic activation of location information during word processing. Psychonomic Bulletin & Review, 18(6), 1180-1188. Gmilbrat (talk)

---

Semantic processing and the development of word recognition skills: Evidence from children with reading comprehension difficulties

Previous studies have acknowledged the importance of phonological awareness in language and reading acquisition. However, many of these studies ignore the semantic aspects of learning to read. This study was based on the idea that reading comprehension involves two skills: decoding ability and linguistic comprehension. It aimed to look specifically at children who have reading comprehension difficulties in order to test the importance of semantic processing in reading acquisition.

The two predictions tested in this study were that children with specific reading comprehension difficulties show impairments in semantic processing, but not phonological processing, and that differences in semantic processing skills would influence word recognition abilities. They predicted this because if poor comprehenders still have good decoding skills, they should be able to read words that are frequent or regularly spelled with ease; however, they would have more difficulty reading low-frequency exception words.

The children were all between 8 years, 6 months and 9 years, 6 months and attended the same school. They were matched for their age, as well as non-verbal and decoding abilities. All the children were tested according to the Neale Analysis and were considered at least age-appropriate in reading accuracy. Decoding ability was tested using the Graded Nonword Reading Test. To test semantic difficulties, the children took the Test of Word Knowledge, which tests receptive and expressive vocabulary.

This study consisted of three experiments. The first experiment tested the ability to access semantic and phonological information. For semantic information, the task was to decide whether two words were synonyms; for phonological processing, children judged whether two spoken words rhymed. The results showed that children with poor comprehension skills responded more slowly and made more errors on the synonym task, but performed similarly to the control group on the rhyme judgments. This suggests that they have trouble with semantic but not phonological processing.

The second experiment also tested semantic and phonological processing, but did so using verbal fluency: instead of simply identifying whether words are synonymous or rhyme, children had to produce them. To test semantics, the children were given spoken categories, such as animals, and given 60 seconds to generate as many examples as they could. To test phonological processing, children were given words and asked to produce as many rhyming words as possible in 60 seconds. The results of this experiment were consistent with the first experiment, showing more difficulty in accessing and retrieving semantic information in children with poor comprehension skills. It is possible that their difficulty in these single-word semantic tasks contributes to their comprehension difficulties.

The third experiment looked at whether these semantic difficulties would also affect word recognition. The children were asked to read words that varied in frequency and spelling regularity. The researchers expected the children with poor comprehension skills to have more difficulty reading irregular and low-frequency words, because these require not just decoding but also semantic processing. They did see word-recognition weaknesses in children with comprehension difficulties compared with their controls, specifically for low-frequency and exception words. However, a three-way interaction between reader group, frequency, and regularity was not significant. This could be because the subjects were too young to have fully developed word-recognition skills, or because, given their age, they might not have been exposed to the high-frequency words often enough to treat them as such. The study does show that children with poor comprehension skills had more difficulty with semantics, and that semantic abilities play a role in word recognition.

Nation, K., & Snowling, M. J. (1998). Semantic processing and the development of word recognition skills: Evidence from children with reading comprehension difficulties. Journal of Memory and Language, 39, 85–101. Lcannaday (talk) 16:16, 8 March 2012 (UTC)

---

Speaker Variability Augments Phonological Processing in Early Word Learning by Gwyneth Rost and Bob McMurray Misaacso (talk) 20:02, 7 March 2012 (UTC)

Infants have difficulty learning phonologically similar words, as shown in the switch task. For this task, infants are habituated to two objects that are paired with two words and are then tested in two ways. On same trials, infants see a pairing that was used during habituation; on switch trials, they see an object paired with a word that it was not paired with during habituation. If infants have actually learned the words, the mismatch should cause them to dishabituate to the pairings. The task therefore assesses whether an infant has learned the word-object pairs well enough to be surprised when the pairing changes. Past research using the switch task has found that 14-month-olds notice misnaming when the words differ by multiple phonemes, but not when they differ by a single phoneme. Learning a word demands multiple cognitive and perceptual abilities, including attention, memory, and inductive thinking. When children hear a non-word that sounds similar to a known word, the known word is partially activated. In 17-month-olds, the ability to successfully complete the switch task is correlated with the size of the lexicon.

Phoneme discrimination is a component of learning words, and these abilities continue to develop as cognitive capacities increase, so that lexical similarity causes fewer difficulties. Variation in how words are pronounced is normal; infant-directed speech, for example, has a larger range of possible productions than adult-directed speech. Studies of visual category learning have shown that infants trained on a single pattern can distinguish individual examples, whereas infants trained on multiple patterns are able to determine what belongs to specific categories. This study used the switch task from previous research, with a few alterations: a novel object was introduced at the end of the test, real single-colored objects were used instead of multi-colored objects, photographs were used instead of moving film, and two of the original words were replaced to make the learning situation easier.

The infants were presented with three photographs of single-colored objects and two sound files of a female voice reciting /buk/ and /puk/ at intervals of two seconds. The pictures and sounds were presented together so that they appeared at the same time. Infants were habituated to the stimuli and looking time was recorded. After habituation, infants were tested on same trials, which repeated the habituation pairings; switch trials, in which the object was paired with the opposite word; and novel trials, in which an object the infant had never seen was paired with one of the stimulus words.
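The following sketch illustrates the general logic of a habituation criterion and the same/switch comparison with invented looking times; the 50%-of-baseline criterion shown is a common convention in this literature and is not necessarily the one used in this study:

<syntaxhighlight lang="python">
# Invented looking times (seconds) for one infant; illustrates the logic of
# a habituation criterion and the switch-task comparison, not the study's data.
habituation_trials = [14.2, 12.8, 11.5, 9.0, 7.1, 6.0, 5.4]

def habituated(times, window=3, criterion=0.5):
    """True once mean looking over the last `window` trials falls below
    `criterion` times the mean of the first `window` trials."""
    if len(times) < 2 * window:
        return False
    baseline = sum(times[:window]) / window
    recent = sum(times[-window:]) / window
    return recent < criterion * baseline

print(habituated(habituation_trials))  # True: looking has dropped by half

# Test phase: longer looking on switch trials than on same trials is taken
# as evidence that the infant noticed the new word-object pairing.
same_trials = [5.2, 4.9]
switch_trials = [8.6, 7.9]
dishabituation = (sum(switch_trials) / len(switch_trials)
                  - sum(same_trials) / len(same_trials))
print(f"dishabituation effect: {dishabituation:.1f} s")
</syntaxhighlight>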

In Experiment 1, researchers found that infants were not able to distinguish between the lexical neighbors when they were paired with the visual stimuli. Infants did learn during the task, however, as they dishabituated to new visual stimuli. In Experiment 2, researchers modified the auditory stimulus by splicing together /buk/ and /puk/ recordings from eighteen adults, so that each trial contained multiple tokens of each word from different speakers. The test condition (same, switch, or novel) was the independent variable, and looking time was the dependent variable. This multi-speaker version of the task gave infants enough information to successfully complete the switch task.

Rost, G. C., & McMurray, B. (2009). Speaker variability augments phonological processing in early word learning. Developmental Science, 12(2), 339-349. doi:10.1111/j.1467-7687.2008.00786.x Misaacso (talk) 06:46, 8 March 2012 (UTC)

---

Morphological Awareness: A key to understanding poor reading comprehension in English TaylorDrenttel (talk) 20:14, 7 March 2012 (UTC)

Reading ability in elementary school children is an extensively researched topic. At this age children are learning a vast amount of vocabulary and developing their reading and writing skills. This particular study examined several reading-related abilities of children in Grades 3 and 5. More specifically, the researchers tested how morphological awareness affects reading comprehension in children who are already below average in comprehension, and when this weakness emerges.

Previous research has linked morphological awareness to success in word reading and reading comprehension. Furthermore, there is evidence that the role of morphological awareness is more pertinent to children’s reading development in later elementary grades. The authors hoped to solidify this link and determine when the weakness in morphological awareness might emerge.

This longitudinal study divided Grade 5 children into three groups of comprehenders based on a regression equation that combined age, word reading accuracy, word reading speed, and nonverbal cognitive ability. The three groups were unexpected poor comprehenders, expected average comprehenders, and unexpected good comprehenders; they were similar in nonverbal ability and word reading accuracy but differed in reading comprehension. Children were administered reading or oral tasks that tested reading comprehension, word identification, word reading efficiency, nonverbal ability, vocabulary knowledge, naming speed, phonological awareness, orthographic processing, and morphological awareness. To measure morphological awareness, children were given word analogies consisting of 10 inflectional and derivational items and had to say the word that completed the pattern. The tests were given twice each year.
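A rough sketch of the residual-based grouping logic is shown below, using simulated scores; the predictors mirror the description above, but the numbers, the z-score cutoffs, and the scaling are illustrative assumptions rather than the authors' procedure:

<syntaxhighlight lang="python">
# Sketch of residual-based grouping (invented data; the predictors mirror the
# study's description but the numbers and cutoffs are illustrative only).
import numpy as np

rng = np.random.default_rng(0)
n = 100
age = rng.normal(10.5, 0.3, n)            # years
word_accuracy = rng.normal(50, 8, n)       # raw score
word_speed = rng.normal(60, 10, n)         # words per minute
nonverbal = rng.normal(100, 12, n)         # standard score
comprehension = (0.4 * word_accuracy + 0.2 * word_speed
                 + 0.1 * nonverbal + rng.normal(0, 5, n))

# Predict comprehension from the control variables and keep the residuals.
X = np.column_stack([np.ones(n), age, word_accuracy, word_speed, nonverbal])
coeffs, *_ = np.linalg.lstsq(X, comprehension, rcond=None)
residuals = comprehension - X @ coeffs

# Children well below prediction are "unexpected poor comprehenders",
# well above are "unexpected good", and the rest are "expected average".
z = (residuals - residuals.mean()) / residuals.std()
groups = np.where(z < -1, "unexpected poor",
                  np.where(z > 1, "unexpected good", "expected average"))
print({g: int((groups == g).sum()) for g in np.unique(groups)})
</syntaxhighlight>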

The groups did not differ in phonological awareness, naming speed, or orthographic processing, but there were significant group differences in morphological awareness. At Grade 3 the groups differed slightly in morphological derivation but not inflection, and overall Grade 3 children performed less well than Grade 5 children. At Grade 5, unexpected poor comprehenders performed significantly less well than the other two groups; again, this was specific to morphological derivation rather than inflection. As the researchers predicted, morphological awareness is associated with reading comprehension; however, they extended previous findings by showing that these children have adequate orthographic processing, phonological awareness, and naming speed skills.

This study is the first to give an understanding of when morphological difficulties emerge in poor comprehenders. This knowledge is critical for schoolteachers of third, fourth, and fifth grade students. Greater emphasis should be placed on morphological forms, specifically derived ones, in order to boost the reading comprehension of students.

Tong, X., Deacon, S., Kirby, J. R., Cain, K., & Parrila, R. (2011). Morphological awareness: A key to understanding poor reading comprehension in English. Journal of Educational Psychology, 103(3), 523-534. doi:10.1037/a0023495

---

Foveal processing and word skipping during reading by Denis Drieghe

Previous research has generated models that all include the assumption that word skipping during reading is closely linked to the amount of parafoveal processing during the preceding fixation. The fovea is the center of the retina and provides focused vision, an attentional beam; the parafoveal region is the area around it. According to standard assumptions about word recognition, the word under the attentional beam is the only word being processed. However, it has been shown that while the attentional beam remains on a word (n), processing in the parafoveal region also occurs, allowing the next word (n+1) to be processed. Words are skipped because they are recognized in parafoveal vision. Prior research has also shown that skipping rates are higher for short words than long words and for common words than less common words. In addition, case alternation and reduced contrast produce longer fixation times on word n compared with normal conditions, but not on n+1.

The current study focused on the E-Z Reader model. It sought to test the skipping rate of word n+1 and to further examine the link between fixation on word n and skipping of n+1. Seventy-two sentences were constructed featuring a five-letter word n followed by word n+1 (in half the sentences n+1 was three letters long and in half four letters long). Two additional conditions were created by presenting word n in reduced contrast or in alternating case. Thirty university students first read 12 practice sentences to calibrate the eye-tracking system, of which 4 featured a case-alternated word, 4 featured a reduced-contrast word, and 4 were unmodified. Each participant then read 24 sentences per condition.

Results showed that reducing the contrast of word n led to increased fixation times on word n, but not on n+1. When word n was manipulated through case alternation, longer fixation times occurred on both word n and n+1. These findings reflect earlier research. Interestingly, the low-contrast condition reduced skipping of word n+1, while there was no difference between the case-alternation and normal conditions. The researchers also obtained results inconsistent with previous research, showing that a less common word n led to reduced skipping of word n+1 and that an incorrect parafoveal preview was skipped less often than a correct preview. The decision to skip n+1 is therefore influenced not just by the amount of parafoveal processing but also by the ease of processing word n. The findings suggest that models of word processing, particularly the E-Z Reader model examined here, need to be reevaluated.
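As a small illustration of how a skipping rate for word n+1 might be scored per condition, the sketch below uses invented fixation records; the condition labels follow the description above, but the data are not from the study:

<syntaxhighlight lang="python">
# Invented fixation records illustrating how the probability of skipping
# word n+1 can be computed per condition; not the study's data.
trials = [
    # (condition of word n, whether word n+1 was fixated)
    ("normal", True), ("normal", False), ("normal", False), ("normal", True),
    ("low-contrast", True), ("low-contrast", True), ("low-contrast", True),
    ("case-alternated", True), ("case-alternated", False), ("case-alternated", False),
]

skip_counts = {}
for condition, fixated in trials:
    total, skips = skip_counts.get(condition, (0, 0))
    skip_counts[condition] = (total + 1, skips + (0 if fixated else 1))

for condition, (total, skips) in skip_counts.items():
    print(f"{condition:>15}: skipping rate {skips / total:.2f}")
# A lower skipping rate in the low-contrast condition would correspond to the
# reported finding that degrading word n reduces skipping of word n+1.
</syntaxhighlight>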

Drieghe, D. (2008). Foveal processing and word skipping during reading. Psychonomic Bulletin & Review, 15(4), 856-860. doi:10.3758/PBR.15.4.856 Mvanfoss (talk) 00:22, 8 March 2012 (UTC)

---

Automatic Activation of Location During Word Processing

If a person were asked to point to either the ceiling or the floor, they would most likely point up or down, respectively. In interacting with and learning about the world, people tend to associate the objects or events that they encounter with the words that label them. This means that words become associated with experiential traces tied to the locations where these labels are learned. Previous studies have shown that when people later read or hear words for objects or events, their experiential traces (associations made at the time of learning) are reactivated, and that these traces affect sensory-motor activation as well. This study aims to find out when the activation of an entity's location information, such as up or down, occurs after encountering the word, and whether reaction time depends on the compatibility of the word and the response location. The researchers also ask whether the activation of location information is relatively automatic or whether it is task dependent and requires further context.

In the first experiment, participants performed a lexical decision task with words that have an up or down referent while the reaction time of upward or downward responses was measured. Thirty-six native German speakers were presented with 78 German nouns and 78 nonsense filler words. The 78 German nouns included 39 words associated with an up location and 39 words associated with a down location. Participants were asked to respond if the letter string they saw was a word; half responded to words with an upward motion and half with a downward motion, and the reaction time of the movement was measured. The researchers found a significant interaction between the referent location of the word and the response direction. Participants were faster in deciding that a letter string was a word when the direction of their response was compatible with the direction (up or down) associated with the word than when the word location and movement were incompatible.
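The compatibility effect described here is simply the difference between mean reaction times in incompatible and compatible trials. A minimal sketch with invented reaction times:

<syntaxhighlight lang="python">
# Invented reaction times (ms) illustrating how a compatibility effect is
# scored; the numbers are not the study's data.
trials = [
    # (word's referent location, response direction, RT in ms)
    ("up",   "up",   612), ("up",   "down", 668),
    ("down", "down", 605), ("down", "up",   671),
    ("up",   "up",   590), ("down", "down", 598),
    ("up",   "down", 655), ("down", "up",   660),
]

def mean_rt(condition):
    rts = [rt for loc, resp, rt in trials if condition(loc, resp)]
    return sum(rts) / len(rts)

compatible = mean_rt(lambda loc, resp: loc == resp)
incompatible = mean_rt(lambda loc, resp: loc != resp)
print(f"compatibility effect: {incompatible - compatible:.0f} ms")
# A positive difference means responses were faster when the movement
# direction matched the word's typical location (e.g., "up" word, upward key).
</syntaxhighlight>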

Next, they wanted to see if this location activation occurred automatically. Using the same set-up as Experiment 1, they had participants respond with an upward or downward movement based on the font color. They found a significant interaction between the direction of the participants' responses and the direction connotation of the presented word, suggesting that the location information of a word is automatically activated when the word is seen. The researchers also performed similar experiments to control for participants' recognition of the location pattern and for reaction times being affected by movement of the dominant hand. These experiments produced data compatible with the previous results.

In each of their four experiments, the researchers found that participants were faster at responding to a word when the word's location was compatible with the direction of their response (for example, when the word had an up connotation and the response was also upward) than when they were opposite, exactly as the researchers predicted. They also found that this location information is automatically activated without further context, which also supports their hypothesis. Knowing this can help people further understand why they make certain associations with words: the associations are based on previous experiences involving those particular words.

Lachmair, M., Dudschig, C., De Filippis, M., de la Vega, I., & Kaup, B. (2011). Root versus roof: Automatic activation of location information during word processing. Psychonomic Bulletin & Review, 18(6), 1180-1188. doi:10.3758/s13423-011-0158-x Lino08 (talk) 05:54, 8 March 2012 (UTC)

---

Brown, S. W. (2008). Polysemy in the mental lexicon. Colorado Research in Linguistics, 21, 1-12. Ahartlin (talk) 03:08, 8 March 2012 (UTC)

---

Lanthier, S. N., Risko, E. F., Stolz, J. A., & Besner, D. (2009). Not all visual features are created equal: Early processing in letter and word recognition. Psychonomic Bulletin & Review, 16(1), 67-73. doi:10.3758/PBR.16.1.67 Hhoff12 (talk) 06:45, 8 March 2012 (UTC)

The researchers in this study were trying to determine which visual features are most important in letter and word recognition. Features combine to make letters, and letters combine to make words. Previous research used confusion matrices, but these could not differentiate between the different features. The importance of a feature can instead be determined by how a participant's performance changes when that feature is altered. Research by Biederman removed midsegments and vertices from objects and found that removing midsegments was less detrimental than removing vertices when people tried to identify the objects. The current study looked at letters and words to see whether similar results would be found: deleting vertices from letters and words should be more detrimental than removing midsegments.

Four experiments were used to determine whether certain features are more important than others. In the first experiment, letters were presented normally (with nothing removed), with vertices removed, or with midsegments removed; letters containing no vertices were excluded from the experiment. Three versions of each letter were created, and when vertices and midsegments were removed, the same number of pixels was removed in each case. Participants watched the screen and a letter was presented until they made a vocal response, at which point the experimenter marked whether the answer was correct. The first experiment found significantly slower reaction times in the trials with midsegment and vertex deletions, and vertex deletion was more detrimental than midsegment deletion.

In the second experiment, the letters were shown only briefly. Previous research had shown that as exposure duration decreases, the effects of vertex deletion increase. Sure enough, reaction times were slower overall, and responses in the vertex-deletion condition were slower than in the midsegment-deletion condition.

The third experiment looked at words instead of letters. With words, there is an element of context that helps readers identify them, and previous research had shown that this context can offset simple letter-level processing. This time, participants were presented with only two conditions: words whose letters had midsegment deletions and words whose letters had vertex deletions. There was no significant difference between the two conditions, which suggests that context really can compensate for letter-level processing.

Similar to the second experiment, the fourth one looked at words presented only briefly. Results showed significantly slower reaction times to words with vertices missing than to words with midsegments missing, and more errors were made in the vertex-deletion condition. When time is limited, removing vertices is more detrimental.

Overall, the research indicates that vertices are a particularly important feature in letter and word recognition.

---

Word misperception, the neighbor frequency effect, and the role of sentence context: evidence from eye movements

A key part of reading, and one might argue the only part, is the identification of words. Word identification is affected by several factors, including a word's frequency in the language and its orthography. The visual input triggers activation in the lexicon, and words that are more common are activated faster than others. However, words that look similar to each other, like "gloss" and "glass", may activate each other readily enough to affect how a word is processed. Words that have the same number of letters and differ by only one letter are called neighbors, and Slattery tested whether a low-frequency word could be misperceived as a higher-frequency neighbor (HFN).
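Using the definition of a neighbor given here (same length, one letter different), a higher-frequency neighbor can be found with a small lookup over a frequency table. The sketch below uses a hypothetical frequency list, not Slattery's materials:

<syntaxhighlight lang="python">
# Hypothetical word-frequency table (occurrences per million); illustrates
# how a higher-frequency neighbor (HFN) can be identified for a target word.
frequency = {"glass": 95, "gloss": 4, "gross": 30, "grass": 55, "class": 120,
             "bless": 12}

def neighbors(word, vocab):
    """Words of the same length differing from `word` in exactly one letter."""
    return [w for w in vocab
            if len(w) == len(word)
            and sum(a != b for a, b in zip(w, word)) == 1]

def higher_frequency_neighbors(word, freq):
    return [w for w in neighbors(word, freq) if freq[w] > freq[word]]

print(higher_frequency_neighbors("gloss", frequency))
# ['glass', 'gross']: low-frequency "gloss" has higher-frequency neighbors
# that, on Slattery's account, it can occasionally be misperceived as.
</syntaxhighlight>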

The effects of HFNs on reading were tested using eye movements. Previous research has shown that when there are anomalies in a sentence, readers have trouble immediately. Slattery proposed that if the problem with the sentence was misperceiving a word as its HFN, readers would fixate on it longer than on the rest of the words. To test this, thirty-two students were presented with two kinds of sentences. In the experimental sentences, one word had an HFN, while the control sentences did not. Additionally, in some sentences the target word and its associated HFN were both consistent with the sentence context.

Words that fit the context of the sentence were not fixated more often or for longer, even if the target word had an HFN. However, if the HFN conflicted with the sentence, readers looked at the target word for longer. These results are consistent with previous research showing that priming the lower-frequency target word can eliminate HFN misperceptions. Slattery notes that these effects occurred at a relatively high rate of 9.8%, even though participants could look at the words for as long as they wanted. Because readers who misperceive a word as its HFN must then deal with integrating an incorrect meaning into the current sentence, he argues that these errors are genuine misperceptions.

Slattery, T. J. (2009). Word misperception, the neighbor frequency effect, and the role of sentence context: Evidence from eye movements. Journal of Experimental Psychology: Human Perception and Performance, 35(6), 1969-1975. doi:10.1037/a0016894 AndFred (talk) 22:33, 8 March 2012 (UTC)

---

Parise, E., Palumbo, L., Handl, A., & Friederici, A. D. (2011). Influence of eye gaze on spoken word processing: An ERP study with infants. Child Development, 82(3), 842-853. Sek12 (talk)

---

The authors used the boundary paradigm to test two hypotheses that might explain why no conclusive evidence has been found for the existence of n+2 preprocessing effects. Two experiments were designed to address the conflicting results of previous research, and the results help explain why similar experiments had previously produced opposite findings.

College students participated in the study, and their eye movements were recorded as they read sentences on a screen. The experimental sentences were displayed so that the display change for words n+1 and n+2 occurred within 9 ms of the reader's gaze crossing an invisible boundary. After practice trials, participants read 120 experimental sentences embedded among 48 filler sentences in random order. Approximately 33% of the sentences were followed by a two-alternative forced-choice comprehension question that participants answered by pressing the button corresponding to the correct answer on a button box.
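The gaze-contingent logic of the boundary paradigm can be sketched as follows; the function names, data structure, and sample loop are invented for illustration and do not correspond to the eye-tracking software actually used:

<syntaxhighlight lang="python">
# Schematic of boundary-paradigm logic: while the eyes are left of an
# invisible boundary, words n+1 and n+2 are shown as preview strings; as soon
# as the gaze crosses the boundary, the display is swapped to the target
# words. Names and the sample loop are invented for illustration only.

from dataclasses import dataclass

@dataclass
class BoundaryTrial:
    boundary_x: float            # pixel position of the invisible boundary
    preview_text: str            # sentence with preview masks for n+1 / n+2
    target_text: str             # correct sentence
    swapped: bool = False

def update_display(trial, gaze_x, draw):
    """Call once per eye-tracker sample; `draw` renders a string immediately.
    The swap must finish within a few milliseconds of the boundary crossing
    (the study reports the change completing within 9 ms)."""
    if not trial.swapped and gaze_x > trial.boundary_x:
        trial.swapped = True
    draw(trial.target_text if trial.swapped else trial.preview_text)

# Simulated gaze samples moving left to right across the boundary at x = 400.
trial = BoundaryTrial(400, "The old man xqzd tlw dog", "The old man fed the dog")
for gaze_x in (320, 360, 395, 405, 450):
    update_display(trial, gaze_x, draw=lambda text: print(f"{gaze_x:>4}px: {text}"))
</syntaxhighlight>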

In this study, the researchers were able to test two factors that might explain why some studies found evidence of parafoveal preprocessing of the second word to the right of fixation, whereas other studies found no such evidence. In Experiment 1, the authors investigated whether the properties of the first word to the right of fixation, specifically its word length and word type, influenced whether word n+2 could be processed parafoveally. Even when word n+1 was the article "the", arguably the word that can be identified with the least processing effort, they found no evidence of parafoveal lexical preprocessing of n+2, neither when n+1 was the definite article nor when it was a non-article three-letter word.

In Experiment 2, the researchers tested whether the amount of preprocessing of word n+2 was influenced by the frequency of word n. Again, they did not find any solid evidence of parafoveal n+2 preview processing, except in those conditions in which word n+1 was subsequently skipped or a skip was attempted. The only variable that showed some effects of n+2 preview even when n+1 preview was denied was landing position, although these effects might reflect low-level properties of the masks used rather than lexical processing.

It is, of course, possible that the extent of parafoveal processing of word n+2 is determined by a variable not systematically manipulated in this or any previous study. This study, therefore, does not demonstrate that readers never use parafoveal information from beyond an unidentified word n+1 and word n+2 at the same time. It does, however, show that readers, at least when reading English, do not seem to make regular use of parafoveally available information about word n+2, which argues against parallel lexical processing being the default.

Angele, B., & Rayner, K. (2011). Parafoveal processing of word n + 2 during reading: Do the preceding words matter? Journal of Experimental Psychology: Human Perception and Performance, 37(4), 1210-1220. Zc.annie (talk)

---

Semantic Facilitation

Before this study, previous research on semantic facilitation had used lexical decision tasks, which require the subject to determine whether a given stimulus is a word. When two stimuli are presented, the latency to decide that they are words is shorter if the second word is a primary associate of the first ("bread-butter"); this is known as the association effect. It has been found that this effect is produced at the encoding stage and that the facilitation is produced more by "automatic" processes than by attention or expectancy. The effect of association on lexical decision time invites investigation of the role of the semantic relationship between words. There have been attempts to determine which types of association drive the effect, but the question remained open to further study.

This experiment by Fischler builds on the previous research by eliminating direct associative strength from a set of stimuli. Sets of stimulus pairs were constructed for the experiment so the two words were not normatively associated but still had semantic similarity.

Twenty-four participants completed the experiment as part of an undergraduate psychology course. The subjects were seated in front of a four-field exposure box, which was used to present the stimuli. Four groups of pair types were established: associatively related (AR), associatively unrelated (AU), semantically related (SR), and semantically unrelated (SU). These pairings were confirmed before the experiment was conducted.

During the experiment, each subject saw half of the pairs from each of the four pair-type groups, for a total of 32 positive trials. In addition, there were 32 negative trials in each session consisting of word-nonword, nonword-word, and nonword-nonword pairs. The order of pair presentation was random, with no more than four positive or negative trials presented consecutively. The participants were instructed to determine whether the presented strings were words and to indicate their response as quickly as possible by pressing the response keys in front of them. Latency and errors were recorded.

The analysis examined the mean latency for correct responses. Both semantically related and associatively related pairs had shorter latencies than their matching control pairs, and the related pairs were responded to significantly faster than the controls. In addition, word pairs in the semantic condition (SR and SU) had significantly longer latencies than word pairs in the associative condition (AR and AU).

These findings were very similar to results found in earlier studies, suggesting that extending the definition of association to include mutual associates had little effect. In addition, it supports the theory that semantic facilitation can be interpreted as priming, or spreading activation throughout the semantic network. However, further research with more semantically related word pairs across the range of relatedness would be needed to examine the boundary conditions of this priming effect.

Citation: Fischler, I., (1977). Semantic facilitation without association in a lexical decision task. Memory & Cognition. doi: 10.3758/BF03197580. Anelso (talk)

==Sentence Processing==

Sentence Comprehension and General Working Memory

Working memory is defined in this study as the "workspace in which information is maintained and manipulated for immediate use in reasoning and comprehension tasks" (Moser, Fridriksson, & Healy, 2007). This information is later turned into more permanent information that can be retrieved at a later date. Studies in the past have shown a correlation between sentence processing and working memory, but the interpretation of this relationship has remained unclear, and the present study sought to understand the correlation better. This is an important question because there is disagreement about the cognitive construct of working memory: some researchers believe that there is a separate working memory for mediating sentence comprehension, while others believe it is all one unified system. There are also clinical implications for this research, including treating aphasia patients, who often have difficulty matching sentences to pictures and deriving themes from sentences. This is said to be a symptom associated with damage to Broca's area, and more research on working memory and sentence processing could aid aphasia research by localizing where these processes occur. In general, previous research has focused on verbal working memory tasks, such as a reading span task in which participants judge how truthful a statement is after hearing it and then report the last word of as many of the sentences as possible. The problem with this type of task is that sentence processing and reading span tend to overlap, and therefore research needs to analyze nonverbal working memory tasks as well.

The present research hopes to build on previous work by introducing a nonverbal working memory task to see whether working memory is not just language based (verbal) but general (including nonverbal processing). If sentence processing is also correlated with nonverbal working memory, then we can assume that there is not a separate system responsible for sentence comprehension apart from other "types" of working memory.

Sentence-parsing (SP), lexical decision (LD), and nonverbal working memory tasks have been used in the past to understand how humans interpret language in different ways, and all were used in the present study to uncover how working memory affects sentence processing. The lexical decision task presented participants with both real words and non-words and required them to respond as quickly and accurately as possible; it was used as a control task. The sentence-parsing task presented participants with semantically plausible (e.g., "The goalie the team admired retired") and implausible sentences (e.g., "The candy that the boy craved ate") on a computer screen, and at the onset of the last word participants had to decide whether the sentence was plausible by answering "yes" or "no". The nonverbal working memory task consisted of presenting Chinese characters to participants and later asking them to use those characters in a simple equation; they had to decide whether the equation's solution was correct by selecting a "yes" or "no" button. Sixty of these trials were completed, measuring both accuracy and reaction times.

Participants individually completed these tasks online, and an additional nonverbal working memory task was then given offline. The independent variables were the four tasks each participant completed (the online nonverbal working memory, SP, and LD tasks, plus the offline nonverbal working memory task). The dependent variables were the reaction times and accuracy on each task.

The present study found that nonverbal working memory is correlated with sentence processing, suggesting that sentence parsing draws on a general working memory capacity and is not just language specific. This is what the researchers had hypothesized, although the correlations obtained were only moderate (.51 on average). The study reinforces the idea that a single working memory capacity supports all types of language processing, rather than multiple working memories responsible for different tasks.

Moser, D. C., Fridriksson, J., & Healy, E. W. (2007). Sentence comprehension and general working memory. Clinical Linguistics & Phonetics, 21(2), 147-156. doi:10.1080/02699200600782526 Lkientzle (talk) 03:21, 15 March 2012 (UTC)

---

Taking Perspective in Conversation: The Role of Mutual Knowledge in Comprehension by Boaz Keysar, Dale J. Barr, Jennifer A. Balin, and Jason S. Brauner Amf14 (talk) 19:05, 6 March 2012 (UTC)

During a conversation, the two people involved encounter a number of sentences. Because language is ambiguous, these sentences can be interpreted in many ways, so we rely on the "mutual knowledge" shared by the two speakers to decipher what is meant. For example, if a friend asks to see "this" magazine, one may misunderstand which magazine she is requesting, because the word "this" is not specific enough. If she is referring to an object that both individuals can see, it is easy to conclude that the conversation concerns the visible object; this is called the co-presence heuristic, because both individuals can mutually see the magazine. However, a person may have more than one magazine in their line of vision and could therefore misjudge which magazine is meant. Relying only on one's own perspective, when only one person can see the second magazine, is called an egocentric heuristic.

In order to correct mistaken interpretations, we must ensure that both individuals in a conversation have mutual knowledge of what they are speaking about. To study this concept, experiments were conducted in which multiple potential referents were visible during a conversation, so that it was unclear which object was intended.

The experiments explored a difference in perspectives. Participants were given an array of objects and, sitting across from a confederate, were asked to move these objects to new places. However, along with the object being discussed, a similar unintended referent was placed in the participant's view that the confederate could not see. Therefore, when asked to move an object, the participant's fixations could fall on both objects that might be referred to. Eye movements were tracked to detect where the subject's eyes were looking during the experiment. When the eye fixations were analyzed, results showed that participants looked twice as often at the unintended referent and also stared at it for much longer periods of time. They were aware that the confederate could not see the object, but they still considered it as a possible referent during the conversation. When they finally chose which object to move, their times were significantly slower than those of a control group without the extra referent.

The fact that participants knew the confederate could not see the extra object led them to correct their errors in interpretation, but at a much slower rate. This knowledge was necessary for the task to go smoothly. What was most interesting to the experimenters is that although participants had information about which objects were inaccessible to the confederate, their eye movements still suggest that they used egocentric interpretations throughout the experiment.

These findings suggest that although mutual knowledge is necessary in a conversation, that knowledge does not initially restrict a person's interpretation. Listeners still consider all possible referents and are then forced to use mutual knowledge to correct their comprehension, which causes a delay in understanding.

---

Eye Movements of Young and Older Adults During Reading

Previous research on working memory limitations in adults reading sentences has largely relied on the visual moving-window paradigm, which presents a sentence to the reader one phrase at a time. Much of this research has suggested that age differences, and the working memory deficits that accompany aging, do not cause different processing strategies to be used. However, the introduction of eye-tracking technology to the field calls attention to individual differences in sentence processing. Susan Kemper and Chiung-Ju Liu set out to determine whether eye tracking changes the conclusions of this previous research.

The present study explores the differences in the processing of both ambiguous and unambiguous sentences between age groups. Specifically, the researchers examined processing of various sentence types with differing points of ambiguity. This research is deeply important as it contributes to understanding the effects of aging on speed of reading as well as the length of time it takes an individual to clarify ambiguity in a given sentence.

Experiment one examined differences in the processing of various sentences between younger and older adults. The use of eye tracking software was of primary importance in assessing these differences. Sentence types used by the experimenters included pairs of cleft object and cleft subject sentences as well as subject relative clause sentences and object relative clause sentences. Each type of sentence featured a critical region which participants were predicted to focus on more intently than other portions of the sentences. Participants were randomly assigned to lists of sentences and asked to read them while their eye movements were tracked. Additionally, participants completed a sentence acceptability judgment task in which comprehension of the sentences was assessed. Four fixation measurements were collected through eye movement software which indicated the length of time participants focused on the critical region of each sentence. Experiment two followed procedures identical to those of experiment one, but featured sentences in which ambiguity was increased by deleting the word “that” from each sentence.

Results from experiment one indicated that cleft subject and subject relative sentences were much more difficult to process for both age groups. However, older adults experienced greater difficulty in processing the sentences, suggesting that limitations occurring in the working memory of older adults affect the ability to resolve ambiguities. Experiment two revealed that even more time was needed in sentence processing when the word “that” was eliminated from each sentence structure, especially for older adults. Ultimately, the findings of both experiments revealed that older adults had to regress to the critical regions of the sentences more often and required additional time to process ambiguities in the sentences. Older adults cannot hold all components of a sentence in their working memory while processing, and therefore must reprocess ambiguities several times. Through the use of eye tracking software, the present study reveals that younger and older adults may in fact differ in their abilities to process and clarify ambiguities in various sentence structures. These findings contradict previous research stating that there is little difference between age groups in sentence processing. These findings are of great importance as they contribute to understandings of the effects of aging on working memory.

Kemper, S., & Liu, C. (2007). Eye movements of young and older adults during reading. Psychology and Aging, 22(1), 84-93. doi:10.1037/0882-7974.22.1.84 Kfinsand (talk) 03:00, 8 March 2012 (UTC)

---

Effects of Age, Speed of Processing, and Working Memory on Comprehension of Sentences With Relative Clauses (Caplan, DeDe, Waters, Michaud) Katelyn Warburton (talk) 20:46, 8 March 2012 (UTC)

Previous studies have examined the effects of age on comprehension and memory, and show a general decline in abilities as age increases. Past research has mainly focused on comparisons between extreme age groups or within certain age groups. The current study focuses on the influence of age on the speed in which an individual is able to process and remember information from specific sentences.

This study was conducted using two different experiments: the first presented a plausibility judgment task and the second was a question-answer verification task. Two-hundred individuals divided among four age groups (70-90, 50-69, 30-49, and 18-29) were recruited through Boston University and the Harvard Cooperative Program on Aging. In the first experiment, twenty-six pairs of sentences of four types (Cleft-Subject, Cleft-Object, Subject-Subject, and Subject-Object), either implausible or plausible, were presented. Participants were asked to respond “yes” or “no” to the sentences they read, indicating whether or not they made sense.

In the second experiment, thirty-six stimuli sentence pairs were presented—they contained a sentential complement (SC) with a relative clause and a doubly-embedded relative clause (RC) in another. For example, a SC would read “The dealer/indicated that/the jewelry/that/was identified/by the victim/implicated/one of his friends,” whereas an RC would be written as follows, “The dealer/who/the jewelry/that/was identified/by the victim/implicated/was arrested by the police.” Participants were asked to make a yes or no decision involving comprehension of these statements.

Results showed significant negative correlations between age and working memory and between age and speed of processing. These results replicated several previous findings, but this study was the first to show the effects of speed of processing on comprehension of sentences with relative clauses. Results also showed that working memory supports controlled, conscious processes that require storage of linguistic representations, rather than automatic, unconscious ones.

This study suggests that future research should evaluate the nature of online processing and task-related processing to understand the distinctions we are able to make about complex thoughts, concepts, and sentences. Future studies should also evaluate the differences within age categories to more specifically understand what factors can influence working memory and sentence comprehension.

Caplan, D., DeDe, G., Waters, G., & Michaud, J. (2011). Effects of age, speed of processing, and working memory on comprehension of sentences with relative clauses. Psychology and Aging, 26(2), 439-450.

---

Pay now or pay later: Aging and the role of boundary salience in self-regulation of conceptual integration in sentence processing Smassaro24 (talk) 18:46, 10 March 2012 (UTC)

Language processing is among the many things that change as a person ages. When reading, people pause at syntactic boundaries in order to process the information they have read. These “micro-pauses” are known as “wrap-up,” and they can create longer-lasting representations of the information read, which are used later in interpreting the sentence. Previous research supports the idea that wrap-up time increases with age. Elizabeth Stine-Morrow and her colleagues examined wrap-up for texts with syntactic boundaries of different strengths. Using the same sentence material, they varied the boundaries from unmarked (no boundary) to weakly marked (commas added) to strongly marked (a period at the end of a sentence). They aimed to determine whether a greater wrap-up time at a clear syntactic boundary (such as a period) could influence the length of wrap-up at a subsequent boundary within the sentence, and whether an age difference existed.

In the initial study, the participants consisted of twelve older adults, with an average age of 65.7 years, and twelve younger adults, with an average age of 22.9 years. The participants were asked to read passages of text using the moving window method, in which the readers revealed each word of text individually by pressing a button. Each passage included unmarked, weakly marked and strongly marked boundaries. The task measured processing time for each word segment.
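
As a rough illustration of the moving-window procedure described above, the sketch below reveals a passage one segment at a time and records the time spent on each segment. The sample text, the use of the Enter key as the button, and other details are assumptions for illustration, not the researchers' presentation software.

<syntaxhighlight lang="python">
# Illustrative sketch of a self-paced moving-window trial (assumed details, not
# the researchers' software): each key press reveals the next segment, and the
# time spent on the previous segment is recorded as its reading time.
import time

def moving_window_trial(segments):
    reading_times = []
    for idx, _ in enumerate(segments):
        # Mask every segment except the current one, as in the moving-window method.
        display = " ".join(seg if i == idx else "-" * len(seg)
                           for i, seg in enumerate(segments))
        start = time.perf_counter()
        input(display + "   [press Enter for the next segment] ")  # stand-in for the button
        reading_times.append(time.perf_counter() - start)
    return reading_times

# Example passage split at the boundaries of interest (weakly vs. strongly marked).
times = moving_window_trial(["The committee met on Monday,", "debated for hours.", "Then it adjourned."])
print([round(t, 2) for t in times])
</syntaxhighlight>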

The results of the study indicate that the stronger the boundary, the longer the reader’s wrap-up time. The study also supports a “pay now or pay later” effect, in which longer wrap-up early in the sentence can reduce the wrap-up time needed to conceptualize the information at the end of the sentence. Another finding, in accordance with earlier research, was that when boundaries were unmarked, the older readers used more frequent wrap-up pauses, and these occurred earlier in sentence processing.

A follow-up study aimed to replicate the data of experiment 1, using an eye-tracking device in place of the moving window method. The participants were asked to read the passages from a computer screen while wearing a head-mounted device that measured where their eyes were directed at all times. The data from experiment 2 supported and amplified the prior experiment’s conclusion that older readers showed longer and more frequent wrap-up.

The results of the study suggest that wrap-up can be triggered by the presence of clearer syntactic boundaries. This allows the meaning of the sentence to be processed before the sentence has been fully read, which contributes to a shorter wrap-up time later in processing the sentence. The studies also show the tendency of older adults to use more wrap-up time at syntactic boundaries, suggesting a downstream processing advantage that helps older readers consolidate the meaning of the material they have read, making reading more efficient. Both strongly marked syntactic boundaries and longer wrap-up times helped the participants, especially the older readers, understand the meaning of the passages they read.

Stine-Morrow, E. L., Shake, M. C., Miles, J. R., Lee, K., Gao, X., & McConkie, G. (2010). Pay now or pay later: Aging and the role of boundary salience in self-regulation of conceptual integration in sentence processing. Psychology and Aging, 25(1), 168-176. doi:10.1037/a0018127

---

Effects of Age, Speed of Processing, and Working Memory on Comprehension of Sentences

Unlike Peter Pan and the Lost Boys, aging, and the consequences that come with it, affect everybody. One of these consequences is a decline in the ability to quickly process and understand sentences. Two theories exist regarding aging and sentence processing. The first states that as people age, they gain more experience with syntactic structures, which allows for more efficient processing. However, other theorists suggest that with declining mental faculties, sentence comprehension and interpretation may be slowed. Some studies suggest that elderly people with lower working memory capacities take longer to understand and interpret an ambiguous sentence compared to young adults. However, another study found no age differences in auditory comprehension between older and younger adults. The present study attempted to assess the effect of age, processing speed, and working memory on sentence comprehension in both young and older adults.

The researchers created two experiments to study the effects of age, processing speed, and working memory on sentence comprehension. One experiment presented sentences in a plausibility judgment task and the other in a verification task. The study included 200 participants in four categories: old elderly (ages 70-90), young elderly (ages 50-69), old young (ages 30-49), and young young (ages 18-29). Participants were asked to complete three tests of working memory capacity: alphabet span, subtract 2 span, and sentence span. Next, participants completed four measures of processing speed: digit copying, boxes, pattern comparison, and letter comparison.

To test the participants’ sentence comprehension in experiment one, four different types of sentences were used: cleft-subject, cleft-object, subject-subject, and subject-object. The cleft sentences start with “it was” and then continue on. The subject sentences have the subject before the verb, and the object sentences have the direct object first, then the subject, and finally the verb. In experiment two, the researchers created sentences that had only one relative clause (a “that” clause) or sentences that had one relative clause embedded within another relative clause. To test sentence comprehension, participants completed a self-paced reading paradigm in which dashes would appear on the screen. Each time the participant pressed a key, one part of the sentence would appear. After pressing the key again, that part would disappear and the next part of the sentence would be revealed.

After analyzing the results of their two experiments, the researchers found that working memory and speed of processing were positively correlated. They also found that accuracy of sentence comprehension was positively correlated with working memory. The results from experiment one show that reading times for the cleft-object sentences were longer than for the cleft-subject sentences, and reading times for subject-object sentences were longer than those for subject-subject sentences. These effects of age and working memory on accuracy and reading times correspond with results from previous studies. The correlations of age, speed of processing, and working memory with the comprehension measures followed the researchers' predictions: comprehension difficulty increased with age and decreased with faster processing speed and larger working memory. Their analysis showed that older individuals spent more time reading and processing the sentences but had poorer comprehension of them. This research shows that age does have an effect on sentence comprehension, and more research should be done to see whether there is anything the aging population can do to slow this decline in comprehension ability so that they can remain functional in society for longer.
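
The correlational pattern described above can be sketched in a few lines of code; the numbers below are invented purely to illustrate the direction of the reported relationships and are not the study's data.

<syntaxhighlight lang="python">
# Illustrative sketch with made-up numbers (not the study's data): correlating age,
# working-memory span, processing speed, and comprehension accuracy.
import numpy as np

age            = np.array([22, 35, 48, 56, 63, 71, 78, 85])
working_memory = np.array([6.5, 6.1, 5.8, 5.2, 5.0, 4.6, 4.1, 3.9])          # composite span score
speed          = np.array([54, 50, 47, 44, 40, 37, 33, 30])                   # items completed per minute
accuracy       = np.array([0.97, 0.96, 0.94, 0.93, 0.90, 0.88, 0.86, 0.83])   # comprehension accuracy

for name, x in [("working memory", working_memory), ("speed", speed), ("accuracy", accuracy)]:
    r = np.corrcoef(age, x)[0, 1]
    print(f"age vs {name}: r = {r:.2f}")       # all negative: performance declines with age
print(f"working memory vs accuracy: r = {np.corrcoef(working_memory, accuracy)[0, 1]:.2f}")  # positive
</syntaxhighlight>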

Caplan, D., DeDe, G., Waters, G., & Michaud, J. (2011). Effects of age, speed of processing, and working memory on comprehension of sentences with relative clauses. Psychology and Aging, 26(2), 439-450. doi:10.1037/a0021837 138.236.22.152 (talk) 01:25, 13 March 2012 (UTC)

---

These researchers were concerned with the lexical processing effects that occur when English listeners process a spoken sentence. Their first hypothesis was that during lexical processing, a word should show greater or lesser activation depending on the surrounding context. Their second hypothesis was that once the target word had been activated, the words would follow a particular pattern. In essence, they were examining the role of prosody in sentence processing.

The experimenters recruited forty-three participants from the University of Rochester and the surrounding community. All of the participants were native English speakers with normal hearing and normal or corrected-to-normal vision. In the experiment, they tested whether the target word or the competitor word was activated while the sentence was processed (for example, the target word was antlers and the competitor word was ant). Each tested stimulus was accompanied by four pictures: the target word, the competitor word, and two distracters that were phonologically unrelated to the target word. To keep the participants from becoming familiar with the target words, the experimenters added twenty filler sentences that had nothing to do with the experiment. The authors also shifted the pitch of the sentences. There were two pitch conditions: a low-high condition, in which the first five words were low pitched and prosodically aligned so the pitches were the same, and a high-low condition, in which the first five words were high pitched and prosodically aligned so the pitches were the same. During the procedure, the experimenters recorded eye movements. Each trial began with four pictures appearing on the screen; then one of the prerecorded sentences was played, and the participants were to click on the picture that corresponded to the target word. The pictures were randomly ordered on the screen. Three participants were excluded from the data because of low accuracy or because they did not complete the trials.

The overall data suggested that the task was rather easy: participants chose the incorrect picture only three percent of the time. In the perceptual grouping process, participants in the high-low condition were more likely to choose the competitor word than those in the low-high condition. The authors believe that this experiment accurately portrays the perceptual grouping process. The results suggest that the competitor word is more likely to be activated when the target word is stated, and the data support the notion that natural utterances are more easily processed than unnatural utterances. In the future, the experimenters want to work with natural conversational speech instead of sentences that are rarely stated, and to examine the prosody of natural sentences as well.

Brown, M., Salverda, A. P., Dilley, L. C., & Tanenhaus, M. K. (2011). Psychonomic Bulletin & Review, 18(6), 1189-1196. Gmilbrat (talk)

---

Sentence comprehension in young adults with developmental dyslexia by Wiseheart, Altmann, Park, and Lombardino. Misaacso (talk) 02:27, 14 March 2012 (UTC)

Researchers are examining the causes of dyslexia in terms of a deficit in working memory rather than a deficit in linguistic processing. Working memory is a type of short-term memory that assists in conscious and linguistic processing. Past research identified working memory as the weakness underlying difficulties in understanding sentences: it revealed phonological deficits in the working memory of children with dyslexia, who were unable to successfully perform syntactic tasks, including grammar judgments and sentence correction. Some sentences require information to be held in working memory for a longer period of time, which can be problematic for individuals with limited working memory. When a sentence contains a center-embedded clause, the subject has to be kept available in working memory until the verb is presented so that the meaning of the sentence is clear. When the center-embedded clause is object-relative (The man that the woman is pulling pulls the dog), it is thought that there is more demand on working memory to keep vital information present than when it is subject-relative (The man that is pulling the woman pulls the dog). It is not known what part of working memory assists a normally developed adult in processing complex sentences, because much of the research deals with children and dyslexia, not adults. Early difficulties in language development are predictive of later reading issues, but the direction of causation is not known.

Researchers wanted to find out whether children's difficulties with sentence comprehension would continue into adulthood and cause issues with reading ability. They hypothesized that adults with dyslexia would not do as well as controls on tasks that strained working memory and challenged understanding of syntactic structures. The experiment tested young adults, both with and without dyslexia, using a sentence-picture matching task. Researchers wanted to see how syntactic complexity and working memory influenced the ability of subjects to process sentences. Experiment one dealt with active and passive voice in the sentences, while in experiment two, four types of clause sentences were used to see how people responded to and comprehended the sentences. In both experiments, the dependent variables were accuracy and response times on the comprehension task.

Participants who were eligible for the study completed working memory, vocabulary, and sentence comprehension tasks. In the memory task, participants completed the digit span forward and digit span backward tests, in which they repeated an increasingly long string of numbers either as they had heard it or in reverse order. For vocabulary, participants completed tests that had them define various words. The sentence comprehension task had the participants indicate whether or not a picture matched the sentence provided.

In experiment one, all participants responded more slowly to passive than to active sentences, but participants with dyslexia were marginally slower. The accuracy of people with dyslexia was significantly lower than that of controls, though all participants were less accurate with passive sentences than with active sentences. In experiment two, people with dyslexia did more poorly on center-embedded relative clause sentences. In sentences with relative clauses, decision time was not affected by working memory or word reading times. Overall, the important findings are that the participants with dyslexia differed from controls in their accuracy in understanding passive sentences, their response times for passive sentences, and their accuracy in interpreting the complex sentences provided. The hypothesis was supported: there are syntactic comprehension deficits among young adults with dyslexia that could be a feature of the disorder.

Wiseheart, R., Altmann, L. J. P., Park, H., & Lombardino, L. J. (2009). Sentence comprehension in young adults with developmental dyslexia. Annals of Dyslexia, 59, 151-167. doi:10.1007/s11881-009-0028-7 Misaacso (talk) 05:28, 15 March 2012 (UTC)

---

Arnold, J. E., Tanenhaus, M. K., Altmann, R. J., & Fagnano, M. (2004). The old and thee, uh, new: Disfluency and reference resolution. Psychological Science, 15(9), 578-582. doi:10.1111/j.0956-7976.2004.00723.x Hhoff12 (talk) 14:40, 15 March 2012 (UTC)

---

Processing Coordination Ambiguity by Engelhardt and Ferreira

Two types of sentence processing models have developed from previous research: restricted and unrestricted. The Garden Path model is the most famous of the restricted models and says that the processor uses syntactic information to determine meanings in order of simplicity. The Constraint-based model is the primary unrestricted model and says that the sentence processor uses information such as context and frequency of occurrence to form an initially more complex meaning. Previous research has shown that visual context can resolve the ambiguity of phrases that can be interpreted as either a location or a modifier (e.g., “put the apple on the towel in the box”).

The current study investigated coordination ambiguity, the ambiguity that arises when two phrases are combined (e.g., “put the butter in the bowl and the pan on the towel”), with the noun phrase “the pan” referring to either a location for the butter or an object to be moved. Coordination ambiguity is common and depends on whether the noun phrase following the conjunction is part of a complex object or the subject of a conjoined sentence. Previous studies suggest that slower reading times occur because noun-phrase coordination is syntactically simpler than sentence coordination. Participants received instructions of three types: 1) noun-phrase coordination, 2) ambiguous sentence coordination, and 3) unambiguous sentence coordination. Eye movements were monitored with a head-mounted eye tracker. There were 33 visual displays: 3 practice, 15 experimental items, and 15 fillers. Each experimental item was of one of the three types. The researchers predicted that listeners would make predictable fixations to objects in the display as they heard the corresponding words. They also predicted a connection between gaze and planned hand movements related to pointing or grabbing. The study revealed a preference for noun-phrase coordination, which is surprising because a pre-experiment analysis of sentences found that sentence coordination was three times more prevalent than noun-phrase coordination. The researchers also found high fixation rates even with the unambiguous sentence instruction (although this could have been because the display was visually interesting). Another surprising finding was that the garden-path effect was observed only during the ambiguous noun phrase, contrary to previous research.

Based on the results, the researchers feel they can rule out unrestricted processing models. A simplicity heuristic is instead the favored basis of processing.

Engelhardt, P. E., & Ferreira, F. (2010). Processing coordination ambiguity. Language and Speech, 53(4), 494-509. doi:10.1177/0023830910372499 Mvanfoss (talk) 01:02, 15 March 2012 (UTC)

---

“Integration of Visual and Linguistic Information in Spoken Language Comprehension”

It has been thought that as a spoken sentence unfolds it is structured by a syntactic processing module. Previous research has shown that brain mechanisms are responsible for the rapid structuring of input, but it has not been determined whether the early moments of syntactic processing can be influenced by removing or reducing nonlinguistic information in the environment.

However, with new technology, Tanenhaus and his colleagues were able to use eye-movement tracking to provide insight into the many processes involved in language comprehension. In particular, when subjects were given complex instructions, they made sequences of eye movements that closely followed the spoken words relevant to establishing reference. From their experiments they found an integration of visual and linguistic information. They were interested in determining whether information provided in a visual context affects syntactic processing, and specifically whether it affects how a subject processes an instruction as it is given.

Six subjects were presented with either ambiguous or unambiguous instructions. These sentences were “Put the apple on the towel in the box” and “Put the apple that’s on the towel in the box.” These instructions were then paired with different visual contexts (a one-referent context that supported the destination interpretation of the instruction, and a two-referent context that supported the modification interpretation). This created four experimental conditions. The subjects’ eye movements were tracked while they were presented with the sentence and visual context in order to determine how visual information is used while processing both ambiguous and unambiguous sentences.

The eye movement fixation patterns of the six subjects suggested that in a natural context, visual referents are sought in the early moments of linguistic processing. Specifically, “on the towel” was initially interpreted as a destination in the one-referent context but as a modifier in the two-referent context. In addition, when presented with an unambiguous sentence, the eye movements showed no sign of confusion: the participants did not look at incorrect destinations as they did with the ambiguous instruction.

These results are important because they shed light on the way that language is processed using cues from the surrounding environment. They suggest that approaches to language comprehension and processing must integrate the different processing systems, combining both linguistic and non-linguistic information.

Tanenhaus, M. K., Spivey-Knowlton, M. J., Eberhard, K. M., & Sedivy, J. C. (1995). Integration of visual and linguistic information in spoken language comprehension. Science. doi:10.1126/science.7777863 Anelso (talk)

---

Grammatical and resource components of sentence processing in Parkinson’s disease

Because sentence comprehension requires not only linguistic skills but also cognitive resources, this study aimed to specify which aspect of sentence comprehension adults with Parkinson’s disease struggle with. It is known that Parkinson’s disease patients have difficulty understanding sentences that are grammatically complex; however, this may not be entirely due to linguistic impairment. Recent studies have shown that their comprehension difficulties may be related to cognitive resources such as working memory and information-processing speed. fMRI studies suggest that frontal-striatal brain regions are recruited for the cognitive tasks involved in sentence processing. However, this area is compromised in patients with Parkinson’s disease, and previous studies have shown that it is activated less in nonlinguistic tasks that require similar problem solving. In the present study, the authors wanted to build on these previous findings and determine whether the limitations of this area in patients with Parkinson’s disease affected the cognitive processes necessary for understanding grammatically complex sentences.

The study included seven right-handed patients with Parkinson’s disease, but no dementia, and nine healthy seniors of the same age. They were presented with four types of sentences. Grammatical aspects of comprehension were tested by placing clauses in the sentences that were either subject-relative or object-relative. The first type was subject-relative with short linkage: “The strange man in black who adored Sue was rather sinister in appearance.” The second was subject-relative with long linkage: “The cowboy with the bright gold front tooth who rescued Julia was adventurous.” The third was object-relative with short linkage: “The flower girl who Andy punched in the arm was five years old.” Finally, the fourth type was object-relative with long linkage: “The messy boy who Janet the very popular hairdresser grabbed was extremely hairy.” Separating the head noun from the position where the displaced noun phrase is interpreted taxed working memory. As soon as subjects knew the answer to the question provided at the beginning of each run, “Did a male or female perform the action described in the sentence?”, they pressed a button, which triggered the presentation of the following sentence.

Results showed that all subjects activated brain regions associated with grammatical processing, such as posterolateral temporal and ventral inferior frontal regions of the left hemisphere. However, the authors noted an important difference. Healthy seniors activated frontal-striatal brain regions used for cognitive tasks, especially on the sentences with long linkage, suggesting a greater use of working memory. These areas include left dorsal inferior frontal, right posterolateral temporal, and striatal regions that are also associated with cognitive functions during sentence processing. While the patients with Parkinson’s disease accessed these regions less, they activated different regions much more, showing increased activation of the right inferior frontal and left posterolateral temporal-parietal areas. The authors suggest that this compensatory up-regulation of certain brain regions allows patients with mild cases of Parkinson’s disease, like those in this study, to maintain accurate sentence comprehension.

Grossman, M., Cooke, A., DeVita, C., Lee, C., Alsop, D., Detre, J., Gee, J., Chen, W., Stern, M. B., & Hurtig, H. I. (2003). Grammatical and resource components of sentence processing in Parkinson’s disease: An fMRI study. Neurology, 60, 775-781. doi:10.1212/01.WNL.0000044398.73241.13 Lcannaday (talk) 16:13, 15 March 2012 (UTC)

---

Bilingualism

The Forgotten Treasure: Bilingualism and Asian Children's Emotional and Behavioral Health (Wen-Jui Han and Chien-Chung Huang) Katelyn Warburton (talk) 20:32, 8 March 2012 (UTC)

---

English speech sound development in preschool-aged children from bilingual English-Spanish environments (Gildersleeve-Neumann, C. E., Kester, E. S., Davis, B. L., & Peña, E. D.) Kfinsand (talk) 07:36, 13 March 2012 (UTC)

---

Taking Perspective in Conversation: The Role of Mutual Knowledge in Comprehension TaylorDrenttel (talk) 20:47, 13 March 2012 (UTC)

Language is ambiguous. Comprehension requires deciphering the meaning of context, words, intentions, and many other elements. In conversations, understanding the information each speaker holds also helps to eliminate ambiguity. Previous theories of language suggest that people rely on a mutually shared perspective in comprehension. For example, if you are sitting at a table with a friend and a book lies between you, and your friend asks to see that book, there may be other books around you, but since you both can see the book on the table, you assume it is the intended item. The authors of this article attempted to delve deeper into this theory. They believe that people initially use an egocentric heuristic in conversations: people consider objects that are not seen or known by the other person but are potential possibilities from their own perspective. Furthermore, the experiments in this article attempt to identify whether mutual knowledge is used to correct errors that result from an egocentric interpretation.

In the first experiment, twenty native English speakers played a version of the referential communication game. A shelf with a 4x4 grid of slots contained seven objects, which occupied various slots. The participant sat on one side of the shelf, where all the objects were visible, and the confederate sat on the other side, where some objects were hidden. The confederate was given a picture and directed the participant to move the various objects to match the picture. At some point during the task the confederate gave an ambiguous instruction, which could refer to two objects: one that was visible to both and one visible only to the participant. An eye tracker was used to follow the participant’s eye movements and measure how long they looked at the objects. These trials were compared to a control condition in which the object in the hidden slot was changed, so it was not a potential referent of the ambiguous instruction. For example, the ambiguous instruction was to move the small candle one slot to the right. The confederate can see only one candle, but the participant can see two. In the control condition a monkey, or some other completely unrelated object, would occupy the hidden slot.
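
The logic of the display can be pictured as a set of slots that are either mutually visible or hidden from the confederate. The sketch below uses hypothetical objects (not the study's actual stimuli) to show how the set of candidate referents differs between an egocentric view and one restricted to common ground.

<syntaxhighlight lang="python">
# Illustrative sketch with hypothetical objects (not the study's stimuli): the shelf
# as slots that are either mutually visible or hidden from the confederate's side.
from dataclasses import dataclass

@dataclass
class Slot:
    obj: str        # object occupying the slot
    shared: bool    # True if the confederate can also see this slot

shelf = [
    Slot("candle", shared=True),           # the candle the confederate means by "the small candle"
    Slot("smaller candle", shared=False),  # hidden competitor; an unrelated object replaces it in the control
    Slot("toy truck", shared=True),
    Slot("block", shared=True),
]

def candidate_referents(noun, use_common_ground):
    """Objects matching the noun, optionally restricted to mutually visible slots."""
    return [s.obj for s in shelf
            if noun in s.obj and (s.shared or not use_common_ground)]

print(candidate_referents("candle", use_common_ground=False))  # egocentric view: two candles compete
print(candidate_referents("candle", use_common_ground=True))   # common ground: only one candle remains
</syntaxhighlight>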

On average, participants fixated the hidden slot almost twice as long when it contained a possible target (test condition) as when it contained an unrelated object (control condition). Furthermore, in the test condition participants' initial eye movements went fastest to the hidden object, and their initial fixation on the shared object was delayed relative to the control condition. These results show that the egocentric heuristic interferes with participants' ability to settle on the mutually visible object. In some cases the egocentric interpretation was so strong that people picked the hidden object even though they knew the confederate could not see it.

The second experiment was identical to the first except participants helped set up the arrays, so they knew exactly what could not be seen from the confederate’s perspective. The same results were found as in the first experiment.

These results confirm the researchers' expectation that participants occasionally use an egocentric interpretation, which considers objects outside the speaker's view. The egocentric heuristic, though prone to errors, may be used because it is the more efficient strategy and requires less cognitive effort. Overall, this research furthers the understanding of language comprehension and of the mental processes involved in conversation.

Keysar, B., Barr, D.J., Balin, J. A., & Brauner, J. S. (2000). Taking perspective in conversation: the role of mutual knowledge in comprehension. Psychological Science, 11(1), 32-38. doi:10.1111/1467-9280.00211

---

Language mixing in bilingual speakers with Alzheimer’s dementia: a conversation analysis approach Amf14 (talk) 03:01, 15 March 2012 (UTC)