DeepMind Q&A Dataset: CNN/Daily Mail Stories text summarization dataset (Baidu Cloud mirror) https://cs.nyu.edu/~kcho/DMQA/
2021-09-30 16:09:19 67B cnn, Daily Mail, text summarization
After extraction you get cnn_stories.tgz and dailymail_stories.tgz, the datasets needed for text summarization, sourced from https://cs.nyu.edu/~kcho/DMQA/. A minimal extraction sketch follows.
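A minimal sketch of unpacking the two archives with Python's standard tarfile module. The archive names match those listed above; the output directory cnn_dm/ is only an illustrative choice.

```python
import tarfile

# Each archive expands into a directory of .story files
# (article text followed by @highlight summary lines).
for archive in ("cnn_stories.tgz", "dailymail_stories.tgz"):
    with tarfile.open(archive, "r:gz") as tar:
        tar.extractall(path="cnn_dm")  # hypothetical output directory
```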
A text summarization dataset compiled by a lab at Harbin Institute of Technology (HIT).
2021-07-05 09:01:38 416.3MB text summarization, dataset, natural language processing
Text summarization dataset: the raw, unprocessed CNN and Daily Mail data.
The files comprise training, test, and validation splits in .bin format (the finished_files output).
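The .bin naming suggests the preprocessing of See et al.'s pointer-generator code, where each example is a length-prefixed, serialized tf.train.Example with 'article' and 'abstract' features. That layout is an assumption here, so the reader sketch below may need adjusting to the actual files.

```python
import struct
from tensorflow.core.example import example_pb2  # requires TensorFlow

def read_bin(path):
    """Yield (article, abstract) pairs from a pointer-generator style .bin file."""
    with open(path, "rb") as f:
        while True:
            len_bytes = f.read(8)
            if not len_bytes:  # end of file
                break
            str_len = struct.unpack("q", len_bytes)[0]
            example = example_pb2.Example.FromString(f.read(str_len))
            article = example.features.feature["article"].bytes_list.value[0]
            abstract = example.features.feature["abstract"].bytes_list.value[0]
            yield article.decode(), abstract.decode()

# Hypothetical path; the actual file names inside the archive may differ.
for article, abstract in read_bin("finished_files/train.bin"):
    print(abstract[:80])
    break
```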
The document contains a cloud-drive link; the data totals about 319 MB and is useful for NLP work such as text summarization and text classification. The LCSTS dataset includes two parts:
/DATA:
1. PART I: the main content of LCSTS, containing 2,400,591 (short text, summary) pairs. It can be used to train supervised models for summary generation.
2. PART II: contains 10,666 human-labeled (short text, summary) pairs, which can be used to train a classifier to filter the noise in PART I.
3. PART III: contains 1,106 (short text, summary) pairs, each labeled by 3 annotators who assigned the same score. The pairs with scores 3, 4, and 5 can be used as a test set for evaluating summary generation systems (see the filtering sketch after this list).
/Result:
1. sumary.generated.char.context.txt: summaries generated by RNN+context on character-based input.
2. sumary.generated.char.nocontext.txt: summaries generated by RNN+nocontext on character-based input.
3. sumary.generated.word.context.txt: summaries generated by RNN+context on word-based input.
4. sumary.generated.word.nocontext.txt: summaries generated by RNN+nocontext on word-based input.
5. weibo.txt: the Weibo posts of the test set.
6. sumary.human: the human-written summaries corresponding to weibo.txt. This part is the test set of the paper.
7. rouge.char_context.txt: the ROUGE scores for sumary.generated.char.context.
8. rouge.char_nocontext.txt: the ROUGE scores for sumary.generated.char.nocontext.
9. rouge.word_context.txt: the ROUGE scores for sumary.generated.word.context.
10. rouge.word_nocontext.txt: the ROUGE scores for sumary.generated.word.nocontext.
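A sketch of selecting the PART III test pairs with human score 3, 4, or 5, assuming the commonly described XML-like layout with <doc>, <human_label>, <summary>, and <short_text> tags (the markup is not strict XML, hence the regex); the file name DATA/PART_III.txt and the tag order are assumptions.

```python
import re

# Hypothetical markup layout; the actual LCSTS release may differ.
DOC_RE = re.compile(
    r"<doc[^>]*>.*?<human_label>\s*(\d)\s*</human_label>"
    r".*?<summary>\s*(.*?)\s*</summary>"
    r".*?<short_text>\s*(.*?)\s*</short_text>.*?</doc>",
    re.S,
)

def load_test_pairs(path, min_score=3):
    """Keep PART III pairs whose human relevance score is at least min_score."""
    with open(path, encoding="utf-8") as f:
        text = f.read()
    return [
        (short_text, summary)
        for score, summary, short_text in DOC_RE.findall(text)
        if int(score) >= min_score
    ]

pairs = load_test_pairs("DATA/PART_III.txt")  # hypothetical path
print(len(pairs), "test pairs with score >= 3")
```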
2019-12-21 19:26:22 66B nlp