A1 Original article in a scientific journal
Multitask deep learning for native language identification (2020)


Habic, V., Semenov, A., & Pasiliao, E. L. (2020). Multitask deep learning for native language identification. Knowledge-Based Systems, 209, Article 106440. https://doi.org/10.1016/j.knosys.2020.106440


JYU authors or editors


Publication details

All authors or editors of the publication: Habic, Vuk; Semenov, Alexander; Pasiliao, Eduardo L.

Journal or series: Knowledge-Based Systems

ISSN: 0950-7051

eISSN: 1872-7409

Publication year: 2020

Volume: 209

Article number: 106440

Publisher: Elsevier BV

Country of publication: Netherlands

Language of publication: English

DOI: https://doi.org/10.1016/j.knosys.2020.106440

Open access of the publication: Not open

Open access of the publication channel:

The publication is self-archived (JYX): https://jyx.jyu.fi/handle/123456789/72022


Abstract

Identifying the native language of a person from their text written in English (L1 identification) plays an important role in tasks such as authorship profiling and identification. With the current proliferation of misinformation in social media, these methods are especially topical. Most studies in this field have focused on the development of supervised classification algorithms that are trained on a single L1 dataset. Although multiple labeled datasets are available for L1 identification, they contain texts authored by speakers of different languages and do not completely overlap. Current approaches achieve high accuracy on available datasets, but this is attained by training an individual classifier for each dataset. Studies show that joint training of multiple classifiers on different datasets can result in information sharing between the classifiers, leading to increased accuracy on both tasks. In this study, we develop a novel deep neural network (DNN) architecture for L1 classification; it is based on an adversarial multitask learning method that integrates shared knowledge from multiple L1 datasets. We propose several variants of the architecture and rigorously evaluate their performance on multiple datasets. Our results indicate that the proposed multitask architecture achieves higher classification accuracy than previously proposed methods.
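The abstract does not specify the architecture in detail; the following is only a minimal sketch of the general idea of adversarial multitask learning for L1 identification, using a standard gradient-reversal discriminator (it is not the authors' exact model, and the class AdversarialMultitaskL1, the layer types, and all dimensions are illustrative assumptions). A shared encoder feeds one classification head per L1 dataset, while a dataset discriminator trained through gradient reversal pushes the shared representation to be dataset-invariant.

```python
# Sketch of adversarial multitask learning for L1 identification (PyTorch).
# Not the architecture from Habic et al. (2020); names and sizes are assumptions.
import torch
import torch.nn as nn


class GradientReversal(torch.autograd.Function):
    """Identity on the forward pass; reverses (and scales) gradients on backward."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.clone()

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


class AdversarialMultitaskL1(nn.Module):
    def __init__(self, vocab_size, embed_dim, hidden_dim,
                 classes_per_task, num_tasks, lambd=1.0):
        super().__init__()
        self.lambd = lambd
        # Shared encoder: embedding + BiLSTM, mean-pooled into one vector per text.
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                               bidirectional=True)
        # One L1-classification head per dataset (task).
        self.task_heads = nn.ModuleList(
            [nn.Linear(2 * hidden_dim, n_cls) for n_cls in classes_per_task]
        )
        # Adversarial discriminator guesses which dataset the text came from;
        # gradient reversal makes the shared encoder work against it.
        self.discriminator = nn.Linear(2 * hidden_dim, num_tasks)

    def forward(self, token_ids, task_id):
        emb = self.embedding(token_ids)            # (batch, seq, embed_dim)
        out, _ = self.encoder(emb)                 # (batch, seq, 2*hidden_dim)
        shared = out.mean(dim=1)                   # mean-pool over time
        l1_logits = self.task_heads[task_id](shared)
        reversed_feat = GradientReversal.apply(shared, self.lambd)
        task_logits = self.discriminator(reversed_feat)
        return l1_logits, task_logits


if __name__ == "__main__":
    # Toy forward pass: two datasets with 11 and 7 native-language classes.
    model = AdversarialMultitaskL1(vocab_size=5000, embed_dim=64, hidden_dim=128,
                                   classes_per_task=[11, 7], num_tasks=2)
    tokens = torch.randint(1, 5000, (4, 50))       # batch of 4 texts, length 50
    l1_logits, task_logits = model(tokens, task_id=0)
    print(l1_logits.shape, task_logits.shape)      # [4, 11] and [4, 2]
```

In training, each batch would combine a cross-entropy loss on the L1 head of the batch's source dataset with a cross-entropy loss on the dataset discriminator, so that the shared features remain useful for every task while carrying as little dataset-specific signal as possible.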


YSO keywords: natural language; mother tongue; English language; text mining; machine learning

Free keywords: multitask learning; text classification; natural language processing; deep learning


Related organisations


Projects in which the publication was produced


OKM reporting: Yes

Reporting year: 2020

JUFO level: 1


Last updated on 2022-09-20 at 14:55