Naive Bayes in practice, using an email dataset and the SogouC news dataset. Email_NB.py — spam-filtering implementation (Python 3). Naive_Bay.py — Naive Bayes algorithm implementation (Python 3). The sample set is fairly small, so the success rate is roughly 90%.

import numpy as np
from functools import reduce

"""
Function: create the experimental sample set
Parameters:
    none
Returns:
    postingList - the tokenized posts of the sample set
    classVec    - the class label vector
"""
def loadDataSet():
    postingList = [['my', 'dog', 'has', 'flea', 'problems', 'help', 'please'],      # tokenized posts
                   ['maybe', 'not', 'take', 'him', 'to', 'dog', 'park', 'stupid'],
                   ['my', 'dalmation', 'is', 'so', 'cute', 'I', 'love', 'him'],
                   ['stop', 'posting', 'stupid', 'worthless', 'garbage'],
                   ['mr', 'licks', 'ate', 'my', 'steak', 'how', 'to', 'stop', 'him'],
                   ['quit', 'buying', 'worthless', 'dog', 'food', 'stupid']]
    classVec = [0, 1, 0, 1, 0, 1]    # class labels: 1 = abusive, 0 = not abusive
    return postingList, classVec
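The rest of Naive_Bay.py is not reproduced in this listing. The following is only a minimal sketch of how the remaining steps for this example are commonly implemented (vocabulary building, word-to-vector conversion, training with Laplace smoothing, and classification in log space); the function names createVocabList, setOfWords2Vec, trainNB0 and classifyNB are assumptions, not confirmed contents of the file.

def createVocabList(dataSet):
    # Union of all words that appear in the training posts
    vocabSet = set()
    for document in dataSet:
        vocabSet = vocabSet | set(document)
    return list(vocabSet)

def setOfWords2Vec(vocabList, inputSet):
    # Convert one post into a 0/1 vector over the vocabulary
    returnVec = [0] * len(vocabList)
    for word in inputSet:
        if word in vocabList:
            returnVec[vocabList.index(word)] = 1
    return returnVec

def trainNB0(trainMatrix, trainCategory):
    # Estimate P(word | class) with Laplace smoothing; return log-probabilities
    numTrainDocs = len(trainMatrix)
    numWords = len(trainMatrix[0])
    pAbusive = sum(trainCategory) / float(numTrainDocs)      # P(class = 1)
    p0Num, p1Num = np.ones(numWords), np.ones(numWords)      # Laplace smoothing
    p0Denom, p1Denom = 2.0, 2.0
    for i in range(numTrainDocs):
        if trainCategory[i] == 1:
            p1Num += trainMatrix[i]
            p1Denom += sum(trainMatrix[i])
        else:
            p0Num += trainMatrix[i]
            p0Denom += sum(trainMatrix[i])
    return np.log(p0Num / p0Denom), np.log(p1Num / p1Denom), pAbusive

def classifyNB(vec2Classify, p0Vec, p1Vec, pClass1):
    # Compare log P(words | class) + log P(class) for the two classes
    p1 = sum(vec2Classify * p1Vec) + np.log(pClass1)
    p0 = sum(vec2Classify * p0Vec) + np.log(1.0 - pClass1)
    return 1 if p1 > p0 else 0

# Usage on the toy sample set above:
posts, labels = loadDataSet()
vocab = createVocabList(posts)
trainMat = [setOfWords2Vec(vocab, post) for post in posts]
p0V, p1V, pAb = trainNB0(np.array(trainMat), np.array(labels))
print(classifyNB(np.array(setOfWords2Vec(vocab, ['stupid', 'garbage'])), p0V, p1V, pAb))   # expected: 1 (abusive)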
2023-04-06 12:03:35 196KB Bayes
The Sogou news dataset released by Sogou Labs, for use in machine learning.
2022-11-02 01:58:32 366.47MB dataset Sogou News
The Sogou News Dataset is built from the SogouCA and SogouCS news corpora. It contains 2,909,551 articles in 5 categories; each category has 90,000 training samples and 12,000 testing samples, and all samples have been converted to Pinyin.
2022-07-13 16:05:16 366.5MB dataset
The THUCNews dataset was generated by filtering historical Sina News data from 2005–2011. It contains 740,000 news documents, all in UTF-8 plain text. Based on the original Sina News category system, it was reorganized into 14 candidate categories: finance, lottery, real estate, stocks, home, education, technology, society, fashion, current politics, sports, horoscope, games, and entertainment.
2022-07-13 16:05:05 1.45GB dataset
Source code for hands-on text classification on a news dataset. The code was rewritten with Chinese-named identifiers while working through the "Deep Learning Framework: PyTorch in Action" course; before use, add a formal parameter for the model file name.
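The repository's actual model (and its Chinese identifiers) is not shown in this listing; as a rough, hedged sketch only, a minimal PyTorch news-text classifier with the model file name passed in as a parameter might look like the following. All names and sizes here are assumptions for illustration.

import torch
import torch.nn as nn

class NewsClassifier(nn.Module):
    # Bag-of-embeddings text classifier: EmbeddingBag followed by a linear layer
    def __init__(self, vocab_size, embed_dim, num_classes):
        super().__init__()
        self.embedding = nn.EmbeddingBag(vocab_size, embed_dim)
        self.fc = nn.Linear(embed_dim, num_classes)

    def forward(self, token_ids, offsets):
        # token_ids: 1-D tensor of word indices, offsets: start index of each sample
        return self.fc(self.embedding(token_ids, offsets))

def save_model(model, model_file_name):
    # The "model file name" formal parameter mentioned in the description above
    torch.save(model.state_dict(), model_file_name)

model = NewsClassifier(vocab_size=50000, embed_dim=128, num_classes=10)   # hypothetical sizes
save_model(model, "news_classifier.pt")                                   # hypothetical file name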
2022-06-07 09:12:15 15.92MB python classification pytorch artificial intelligence
Hands-on text classification on a news dataset.
2022-05-19 15:30:16 29.27MB classification data mining artificial intelligence machine learning
496,835 news articles from more than 2,000 news sources in the 4 largest categories of the AG news corpus; the dataset uses only the title and description fields. Each category has 30,000 training samples and 1,900 testing samples.
README: AG's News Topic Classification Dataset, Version 3, Updated 09/09/2015.
ORIGIN: AG is a collection of more than 1 million news articles. News articles have been gathered from more than 2000 news sources by ComeToMyHead in more than 1 year of activity. ComeToMyHead is an academic news search engine which has been running since July, 2004. The dataset is provided by the academic community for research purposes in data mining (clustering, classification, etc), information retrieval (ranking, search, etc), xml, data compression, data streaming, and any other non-commercial activity. For more information, please refer to the link http://www.di.unipi.it/~gulli/AG_corpus_of_news_articles.html . The AG's news topic classification dataset is constructed by Xiang Zhang (xiang.zhang@nyu.edu) from the dataset above. It is used as a text classification benchmark in the following paper: Xiang Zhang, Junbo Zhao, Yann LeCun. Character-level Convolutional Networks for Text Classification. Advances in Neural Information Processing Systems 28 (NIPS 2015).
DESCRIPTION: The AG's news topic classification dataset is constructed by choosing the 4 largest classes from the original corpus. Each class contains 30,000 training samples and 1,900 testing samples. The total number of training samples is 120,000 and of testing samples 7,600. The file classes.txt contains a list of classes corresponding to each label. The files train.csv and test.csv contain all the samples as comma-separated values. There are 3 columns in them, corresponding to class index (1 to 4), title and description. The title and description are escaped using double quotes ("), and any internal double quote is escaped by 2 double quotes (""). New lines are escaped by a backslash followed by an "n" character, that is "\n".
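Given the README's description of train.csv / test.csv (three columns: class index 1–4, title, description; fields quoted with double quotes, embedded quotes doubled, real newlines written as a literal backslash followed by "n"), a small loader might look like this sketch; the quote handling is left to the csv module and only the "\n" escape is undone explicitly.

import csv

def load_ag_news(path):
    # Each row: class index (1-4), title, description
    samples = []
    with open(path, newline='', encoding='utf-8') as f:
        for class_idx, title, description in csv.reader(f):
            # Real newlines were written to the file as a backslash plus "n"
            title = title.replace('\\n', ' ')
            description = description.replace('\\n', ' ')
            samples.append((int(class_idx), title, description))
    return samples

train = load_ag_news('train.csv')
print(len(train), train[0][0])   # expect 120,000 rows with labels in 1..4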
2022-01-23 12:58:33 11.24MB classification task AGnews news dataset
Toutiao Chinese news dataset (source: https://github.com/aceimnorstuvwxz/toutiao-text-classfication-dataset), already split 8:1:1 into training, test and validation sets, with each line formatted as news content + '\t' + news label + '\n'; it can be used directly to train a model on AI Studio.
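Since each line is the news text, a tab, and then the label, a loader can be as simple as the sketch below; the file name is an assumption.

def load_toutiao_split(path):
    # Each line: news text + '\t' + label
    texts, labels = [], []
    with open(path, encoding='utf-8') as f:
        for line in f:
            line = line.rstrip('\n')
            if not line:
                continue
            # Split on the last tab so tabs inside the news text do not break parsing
            text, label = line.rsplit('\t', 1)
            texts.append(text)
            labels.append(label)
    return texts, labels

train_texts, train_labels = load_toutiao_split('train.txt')   # hypothetical file name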
Common natural-language-processing data for machine-learning algorithms: a news dataset (news.csv) plus a jieba dictionary, stop-word lists and related files, including data/news.csv, jieba_dict/dict.txt.big, jieba_dict/stopwords.txt, jieba_dict/stopwords_s.txt.
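A typical way to use these files together is to point jieba at the big dictionary, segment the news text, and drop stop words, as in the sketch below; the 'content' column name in news.csv is an assumption and should be adjusted to the file's actual header.

import jieba
import pandas as pd

# Use the larger dictionary shipped in the archive as jieba's main dictionary
jieba.set_dictionary('jieba_dict/dict.txt.big')

# Load the stop-word list into a set for fast filtering
with open('jieba_dict/stopwords.txt', encoding='utf-8') as f:
    stopwords = {line.strip() for line in f if line.strip()}

news = pd.read_csv('data/news.csv')
# 'content' is an assumed column name
tokens = [w for w in jieba.cut(str(news.iloc[0]['content'])) if w.strip() and w not in stopwords]
print(tokens[:20])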
2021-07-19 15:41:33 3.94MB news dataset natural language processing dataset