
Disambiguation and Filtering Methods in Using Web Knowledge for Coreference Resolution

Author(s): Olga Uryupina (University of Trento, Italy), Massimo Poesio (University of Trento, Italy & University of Essex, UK), Claudio Giuliano (University of Trento, Italy & Fondazione Bruno Kessler, Italy), and Kateryna Tymoshenko (University of Trento, Italy & Fondazione Bruno Kessler, Italy)
Copyright: 2012
Pages: 17
Source title: Cross-Disciplinary Advances in Applied Natural Language Processing: Issues and Approaches
Source Author(s)/Editor(s): Chutima Boonthum-Denecke (Hampton University, USA), Philip M. McCarthy (The University of Memphis, USA), and Travis Lamkin (University of Memphis, USA)
DOI: 10.4018/978-1-61350-447-5.ch013


Abstract

The authors investigate two publicly available Web knowledge bases, Wikipedia and Yago, in an attempt to leverage semantic information and improve the performance of a state-of-the-art coreference resolution engine. They extract semantic compatibility and aliasing information from Wikipedia and Yago and incorporate it into a coreference resolution system. The authors show that using such knowledge without disambiguation and filtering brings no improvement over the baseline, mirroring previous findings (Ponzetto & Poesio, 2009). They therefore propose several ways to reduce the amount of noise coming from Web resources: using disambiguation tools for Wikipedia, pruning Yago to eliminate the most generic categories, and imposing additional constraints on the affected mentions. Evaluation experiments on the ACE-02 corpus show that the knowledge extracted from Wikipedia and Yago improves the system's performance by 2-3 percentage points.
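To make the filtering idea concrete, the sketch below illustrates one way such noise reduction could look in practice: pruning overly generic knowledge-base categories before using category overlap as a semantic-compatibility signal between mentions. This is an assumption-laden illustration, not the chapter's actual system; the KB_TYPES lookup, the GENERIC_CATEGORIES list, and the abstention rule are hypothetical placeholders chosen only to mirror the steps named in the abstract.

```python
# Minimal sketch (not the authors' implementation) of filtering noisy
# knowledge-base categories before using them as a coreference feature.
# All data below is hypothetical and exists only for illustration.

# Hypothetical lookup: mention head -> set of knowledge-base categories.
KB_TYPES = {
    "obama": {"person", "politician", "president"},
    "company": {"organization", "entity"},
    "microsoft": {"organization", "company", "entity"},
}

# Overly generic categories that add noise rather than signal; pruning
# such categories is one of the filtering steps described in the abstract.
GENERIC_CATEGORIES = {"entity", "thing", "object", "abstraction"}


def pruned_types(head: str) -> set[str]:
    """Return KB categories for a mention head, minus the most generic ones."""
    return KB_TYPES.get(head.lower(), set()) - GENERIC_CATEGORIES


def semantically_compatible(head_a: str, head_b: str) -> bool:
    """Treat two mentions as compatible if their pruned category sets overlap.

    When either mention has no non-generic categories left, abstain rather
    than claim incompatibility, loosely mirroring the idea of constraining
    which mentions the Web knowledge is applied to.
    """
    types_a, types_b = pruned_types(head_a), pruned_types(head_b)
    if not types_a or not types_b:
        return True  # abstain: no reliable evidence either way
    return bool(types_a & types_b)


if __name__ == "__main__":
    print(semantically_compatible("Microsoft", "company"))  # True: share "organization"
    print(semantically_compatible("Obama", "company"))      # False: disjoint categories
```

In a full system, a feature like this would be only one input to the coreference resolver, alongside disambiguation of the mention against Wikipedia and aliasing information, as described in the abstract.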
