National Taiwan Ocean University Research Hub › College of Electrical Engineering and Computer Science › Department of Computer Science and Engineering
Please use this Handle URI to cite this item: http://scholars.ntou.edu.tw/handle/123456789/17892
Title: Word segmentation refinement by Wikipedia for textual entailment
Authors: Chuan-Jie Lin
Yu-Cheng Tu
Keywords: Encyclopedias; Electronic publishing; Internet; Training; Numerical models; Benchmark testing
Date of Issue: 13-Aug-2014
Publisher: IEEE
Abstract:
Textual entailment in Chinese differs from textual entailment in English because of the lack of word delimiters and capitalization. Information from word segmentation and Wikipedia often plays an important role in textual entailment recognition, but inconsistencies between word-segmentation boundaries and matched Wikipedia titles must be resolved first. This paper proposes four ways to incorporate Wikipedia title matching into word segmentation, experimented with in several feature combinations. The best system redoes word segmentation after matching Wikipedia titles. The best feature combination for the BC task uses content words and Wikipedia titles only, achieving a macro-average F-measure of 67.33% and an accuracy of 68.9%. The best MC RITE system achieves a macro-average F-measure of 46.11% and an accuracy of 58.34%. Both beat all runs in the NTCIR-10 RITE-2 CT tasks.
URI: http://scholars.ntou.edu.tw/handle/123456789/17892
DOI: 10.1109/IRI.2014.7051944
Appears in Collections: Department of Computer Science and Engineering

Show full item record


Items in this IR system are protected by copyright, with all rights reserved, unless otherwise indicated.
