Bilingualism in Multimodal Language Processing: A priming study on processing of gestures in English temporal expressions

University essay from Lunds universitet/Kognitionsvetenskap

Abstract: At its core, language is multimodal (Kendon, 1986; McNeill, 1994): information presented through different channels, such as the visual channel in the form of gestures or the verbal channel in the form of speech or signs, jointly facilitates online language processing (Kelly, Healey, Özyürek, & Holler, 2015; Kelly, Özyürek, & Maris, 2010). This thesis extends previous studies on multimodal processing (Kelly et al., 2015) into the domain of TIME and additionally investigates the influence of bilingualism on the integration of speech and gesture in a priming experiment. Seventy-five monolingual speakers of English and 75 English–Mandarin Chinese bilingual participants decided whether a written prime (PAST or FUTURE) was related to various temporal expressions in English. The temporal expressions were accompanied by a matching or mismatching gesture along the sagittal axis (front to back). Response accuracy and response times were analysed with two Bayesian generalised linear mixed models. Gesture (mis)match affected response time: mismatched trials were predicted to yield response times approximately 150 ms longer. Accuracy, by contrast, was not influenced by gesture (mis)match. No credible effect of bilingualism was found on either response accuracy or response time, nor was an interaction between gesture (mis)match and bilingualism found. This study therefore fails to show any effect of bilingualism on multimodal language processing, but provides further support for the integrated-systems hypothesis, according to which gesture and speech are integrated automatically and early in language comprehension.
