A4 Article in conference proceedings
What Comes First : Combining Motion Capture and Eye Tracking Data to Study the Order of Articulators in Constructed Action in Sign Language Narratives (2020)


Jantunen, T., Puupponen, A., & Burger, B. (2020). What Comes First : Combining Motion Capture and Eye Tracking Data to Study the Order of Articulators in Constructed Action in Sign Language Narratives. In N. Calzolari, F. Béchet, P. Blache, K. Choukri, C. Cieri, T. Declerck, S. Goggi, H. Isahara, B. Maegaard, J. Mariani, H. Mazo, A. Moreno, J. Odijk, & S. Piperidis (Eds.), LREC 2020 : Proceedings of the 12th Conference on Language Resources and Evaluation (pp. 6003-6007). European Language Resources Association. LREC proceedings. https://www.aclweb.org/anthology/2020.lrec-1.735.pdf


JYU authors or editors


Publication details

All authors or editors: Jantunen, Tommi; Puupponen, Anna; Burger, Birgitta

Parent publication: LREC 2020 : Proceedings of the 12th Conference on Language Resources and Evaluation

Parent publication editors: Calzolari, Nicoletta; Béchet, Frédéric; Blache, Philippe; Choukri, Khalid; Cieri, Christopher; Declerck, Thierry; Goggi, Sara; Isahara, Hitoshi; Maegaard, Bente; Mariani, Joseph; Mazo, Hélène; Moreno, Asuncion; Odijk, Jan; Piperidis, Stelios

Place and date of conference: Marseille, France, 11.–16.5.2020

ISBN: 979-10-95546-34-4

Journal or series: LREC proceedings

eISSN: 2522-2686

Publication year: 2020

Pages range: 6003-6007

Number of pages in the book: 7353

Publisher: European Language Resources Association

Publication country: France

Publication language: English

Persistent website address: https://www.aclweb.org/anthology/2020.lrec-1.735.pdf

Publication open access: Openly available

Publication channel open access: Open Access channel

Publication is parallel published (JYX): https://jyx.jyu.fi/handle/123456789/71022


Abstract

We use synchronized 120 fps motion capture and 50 fps eye tracking data from two native signers to investigate the temporal order in which the dominant hand, the head, the chest and the eyes start producing overt constructed action when the signer shifts out of regular narration in seven short Finnish Sign Language stories. From this material, we derive in ELAN a sample of ten transfers from regular narration to overt constructed action, which we then further process and analyze in Matlab. The results indicate that the temporal order of the articulators shows both contextual and individual variation, but also that there are repeated patterns shared across all the analyzed sequences and signers. Most notably, when the discourse strategy changes from regular narration to overt constructed action, the head and the eyes tend to take the leading role, while the chest and the dominant hand tend to start acting last. Consequences of the findings are discussed.
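
As a rough illustration of the kind of alignment and ordering computation the abstract describes, the sketch below converts frame-indexed articulator onsets from the two recording systems (120 fps motion capture, 50 fps eye tracking) onto a common time base in seconds and ranks the articulators by onset time. This is a minimal Python sketch with hypothetical onset values; the paper's actual analysis was carried out on ELAN annotations in Matlab, and none of the variable names or numbers below come from the publication.

    # Hedged illustration (not the authors' code; the paper's analysis was done in Matlab).
    # Hypothetical example: put frame-indexed onsets of constructed action from two
    # recording systems with different frame rates onto a shared time base (seconds),
    # then rank the articulators by onset to see which "comes first".

    MOCAP_FPS = 120.0   # motion capture frame rate reported in the paper
    EYE_FPS = 50.0      # eye tracking frame rate reported in the paper

    # Hypothetical onset frames for one narration-to-constructed-action transfer.
    onsets_frames = {
        "head": ("mocap", 480),
        "eyes": ("eye", 198),
        "chest": ("mocap", 540),
        "dominant_hand": ("mocap", 552),
    }

    fps = {"mocap": MOCAP_FPS, "eye": EYE_FPS}

    # Convert each onset to seconds on the synchronized timeline.
    onsets_s = {art: frame / fps[src] for art, (src, frame) in onsets_frames.items()}

    # Order articulators from earliest to latest onset.
    for articulator, t in sorted(onsets_s.items(), key=lambda kv: kv[1]):
        print(f"{articulator}: {t:.3f} s")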


Keywords: sign language; Finnish Sign Language; motion capture; eye tracking

Free keywords: motion capture; eye tracking; sign language; constructed action; narration


Contributing organizations


Related projects


Related research datasets


Ministry reporting: Yes

Reporting Year: 2020

JUFO rating: 1

