2019
Conferences
Bernard Espinasse; Sébastien Fournier; Adrian Chifu; Gaël Guibon; René Azcurra; Valentin Mace
On the Use of Dependencies in Relation Classification of Text with Deep Learning Conference
20th International Conference on Computational Linguistics and Intelligent Text Processing (CICLing 2019), 2019.
Abstract | Links | BibTeX | Tags: Compositional Word Embedding, Deep Learning, Dependencies, Relation Classification, Word Embedding
@conference{Espinasse2019,
title = {On the Use of Dependencies in Relation Classification of Text with Deep Learning},
author = {Bernard Espinasse and Sébastien Fournier and Adrian Chifu and Gaël Guibon and René Azcurra and Valentin Mace},
url = {https://hal.archives-ouvertes.fr/hal-02103919/document},
year = {2019},
date = {2019-04-07},
urldate = {2019-04-07},
booktitle = {20th International Conference on Computational Linguistics and Intelligent Text Processing (CICLing2019)},
series = {CICLing2019},
abstract = {Deep Learning is increasingly used in NLP tasks such as relation classification in texts. This paper assesses the impact of syntactic dependencies on this task at two levels. The first level concerns the generic Word Embedding (WE) used as input to the classification model; the second concerns the corpus whose relations are to be classified. Two classification models are studied: the first is based on a CNN using a generic WE and does not take into account the dependencies of the corpus to be processed, while the second is based on a compositional WE combining a generic WE with syntactic annotations of the corpus to be classified. The impact of dependencies on relation classification is estimated using two different WEs. The first is essentially lexical and trained on the English Wikipedia corpus, while the second is also syntactic, trained on the same corpus previously annotated with syntactic dependencies. The two classification models are evaluated on the SemEval 2010 reference corpus using these two generic WEs. The experiments show the importance of taking dependencies into account at different levels of relation classification.},
keywords = {Compositional Word Embedding, Deep Learning, Dependencies, Relation Classification, Word Embedding},
pubstate = {published},
tppubtype = {conference}
}
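As an illustration of the compositional-WE idea described in the abstract (not the authors' code), one simple way to combine a generic word embedding with syntactic dependency annotations is to concatenate each token's lexical vector with a one-hot encoding of its dependency label before feeding it to the CNN. The embedding values and label set below are toy assumptions.

```python
# Toy sketch: composing a generic word embedding with a one-hot
# dependency-label feature, one plausible form of "compositional WE".
# Vectors and labels are illustrative, not taken from the paper.

DEP_LABELS = ["nsubj", "dobj", "prep", "root"]  # assumed label inventory

def one_hot(label, labels=DEP_LABELS):
    """One-hot encode a dependency label over a fixed label set."""
    return [1.0 if l == label else 0.0 for l in labels]

def compose(word_vec, dep_label):
    """Concatenate a lexical embedding with its dependency one-hot vector."""
    return word_vec + one_hot(dep_label)

# toy 3-dimensional "generic" embeddings
toy_we = {"cat": [0.1, 0.3, -0.2], "sleeps": [0.0, 0.5, 0.4]}

# 3 lexical dimensions + 4 dependency dimensions = 7-dimensional CNN input
x = compose(toy_we["cat"], "nsubj")
```

In this setup each token's input row grows from the embedding dimension to embedding + label-set size; a real system would learn dense label embeddings instead of one-hot vectors, but the concatenation principle is the same.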
2016
Conferences
Adrian-Gabriel Chifu; Sébastien Fournier
SegChainW2V: Towards a generic automatic video segmentation framework, based on lexical chains of audio transcriptions and word embeddings Conference
20th International Conference on Knowledge Based and Intelligent Information and Engineering Systems (KES 2016), Procedia Computer Science, vol. 96, pp. 1371–1380, Elsevier, 2016.
Abstract | Links | BibTeX | Tags: Lexical Chains, Story Segmentation, Transcriptions, Video Retrieval, Word Embedding
@conference{chifu2016segchainw2v,
title = {SegChainW2V: Towards a generic automatic video segmentation framework, based on lexical chains of audio transcriptions and word embeddings},
author = {Adrian-Gabriel Chifu and Sébastien Fournier},
url = {https://reader.elsevier.com/reader/sd/pii/S1877050916319925?token=8EF351081CC26980139265A58715C3CD59C8D8708A523107D6B67DA85DBFBD02E9D644FBA6DEEA469D14B5B3E6D0BC24},
year = {2016},
date = {2016-09-01},
urldate = {2016-01-01},
booktitle = {20th International Conference on Knowledge Based and Intelligent Information and Engineering Systems},
journal = {Procedia Computer Science},
volume = {96},
pages = {1371--1380},
publisher = {Elsevier},
series = {KES2016},
abstract = {With the advances in multimedia broadcasting through a rich variety of channels and the popularization of video production, it becomes essential to provide reliable means of retrieving information within videos, not only the videos themselves. Research in this area has largely focused on TV news broadcasts, whose structure itself provides clues for story segmentation. Systematic reliance on these clues leads to thematically driven systems that are not easily adaptable to other types of videos; such systems are therefore dependent on the type of videos for which they were designed. In this paper we introduce SegChainW2V, a generic unsupervised framework for story segmentation based on lexical chains from transcriptions and their vectorization. SegChainW2V takes topic changes into account by perceiving the fluctuations of the most frequent terms throughout the video, as well as their semantics through word embedding vectorization.},
keywords = {Lexical Chains, Story Segmentation, Transcriptions, Video Retrieval, Word Embedding},
pubstate = {published},
tppubtype = {conference}
}
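To illustrate the kind of embedding-based boundary detection the abstract describes (a hedged sketch, not SegChainW2V itself), one can average sentence vectors on either side of each candidate cut point in a transcript and place a story boundary where the cosine similarity between the two sides dips below a threshold. The vectors and threshold below are toy assumptions.

```python
# Toy sketch of embedding-based topic segmentation: a boundary is proposed
# where the averaged vectors before and after a cut point diverge.
# Sentence vectors here are illustrative stand-ins for word embeddings.
import math

def cosine(u, v):
    """Cosine similarity of two equal-length vectors (0.0 for zero vectors)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def mean_vec(vectors):
    """Component-wise mean of a non-empty list of vectors."""
    n = len(vectors)
    return [sum(col) / n for col in zip(*vectors)]

def boundaries(sentence_vecs, threshold=0.5):
    """Cut indices i where the segments before and after i diverge."""
    cuts = []
    for i in range(1, len(sentence_vecs)):
        left = mean_vec(sentence_vecs[:i])
        right = mean_vec(sentence_vecs[i:])
        if cosine(left, right) < threshold:
            cuts.append(i)
    return cuts
```

For example, with two sentences about one topic followed by two about another (`[[1,0],[1,0],[0,1],[0,1]]`), index 2 is among the proposed cuts; a real system would compare local sliding windows and pick similarity minima rather than splitting the whole transcript at each point.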