Semantic Structure, Speech Units and Facial Movements: Multimodal Corpus Analysis of English Public Speaking

15 pages • Published: June 5, 2017

Abstract

This study examines connections between semantic structure, speech units, and characteristics of facial movements in EFL learners' public speaking. The data were obtained from a multimodal corpus of English public speaking constructed from digital audio and video recordings of an official English speech contest held in a Japanese high school. Evaluation data from the contest judges were also included. For the audio data, speech pauses were extracted with acoustic analysis software, and the spoken content (text) of each speech unit delimited by two pauses was then annotated. The semantic structures of the speech units were analysed based on segmental chunks of clauses. Motion capture was applied to the video data; forty-two tracking points were set on each speaker's eyes, nose, mouth and face line. The results indicated that (1) speakers with higher evaluations showed a similar semantic structure pattern in their speech units, and this pattern was also confirmed to be similar to that of the NSE samples; and (2) horizontal facial movements and face-rotation angles were extracted from the motion-capture data, a result expected to be useful for defining a facial movement model that effectively describes good eye contact in public speaking.

Keyphrases: acoustic analysis, english education, motion analysis, multimodal corpora, public speech

In: Antonio Moreno Ortiz and Chantal Pérez-Hernández (editors). CILC2016. 8th International Conference on Corpus Linguistics, vol 1, pages 447-461.