Complete edition: Python-based social-network data mining and analysis, with companion data sources and source code.rar

Uploader: 27595745 | Upload time: 2021-08-08 17:09:28 | File size: 803KB | File type: RAR
# -*- coding: utf-8 -*-
# Requires the NLTK data packages 'punkt' and 'averaged_perceptron_tagger'
# (install once via nltk.download()).

import json
import sys

import nltk

# Load the output produced by blogs_and_nlp__get_feed.py
BLOG_DATA = sys.argv[1]
with open(BLOG_DATA) as f:
    blog_data = json.load(f)

for post in blog_data:
    # Split into sentences, tokenize, and part-of-speech tag each token
    sentences = nltk.tokenize.sent_tokenize(post['content'])
    tokens = [nltk.tokenize.word_tokenize(s) for s in sentences]
    pos_tagged_tokens = [nltk.pos_tag(t) for t in sentences and tokens]

    # Flatten the list since we're not using sentence structure here;
    # sentences remain separated by a special POS tuple such as ('.', '.')
    pos_tagged_tokens = [token for sent in pos_tagged_tokens for token in sent]

    # Collapse runs of consecutive nouns (POS tags starting with 'NN')
    # into multi-word entity chunks
    all_entity_chunks = []
    previous_pos = None
    current_entity_chunk = []
    for (token, pos) in pos_tagged_tokens:
        if pos == previous_pos and pos.startswith('NN'):
            current_entity_chunk.append(token)
        elif pos.startswith('NN'):
            if current_entity_chunk != []:
                # Note that current_entity_chunk could be a duplicate when
                # appended, so frequency analysis again becomes a consideration
                all_entity_chunks.append((' '.join(current_entity_chunk), pos))
            current_entity_chunk = [token]
        previous_pos = pos
    # Flush any chunk still in progress when the tokens run out
    if current_entity_chunk != []:
        all_entity_chunks.append((' '.join(current_entity_chunk), previous_pos))

    # Store the chunks as an index for the document
    # and account for frequency while we're at it...
    post['entities'] = {}
    for c in all_entity_chunks:
        post['entities'][c] = post['entities'].get(c, 0) + 1

    # For example, we could display just the title-cased entities
    print(post['title'])
    print('-' * len(post['title']))
    for (entity, pos) in post['entities']:
        if entity.istitle():
            print('\t%s (%s)' % (entity, post['entities'][(entity, pos)]))
    print()
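From the loop above, the input file only needs to be a JSON list of objects, each with 'title' and 'content' keys. Below is a minimal sketch of a compatible input file, so the script can be tried without the rest of the archive; make_sample_feed.py and sample_feed.json are made-up names here, and the real output of blogs_and_nlp__get_feed.py may carry additional fields per post.

# make_sample_feed.py -- hypothetical helper, not part of the archive
import json

# Two short posts with just the 'title' and 'content' keys the script reads
posts = [
    {
        "title": "A Note on the Web",
        "content": "Tim Berners-Lee invented the World Wide Web at CERN. "
                   "The World Wide Web changed publishing forever.",
    },
    {
        "title": "Another Post",
        "content": "Python and NLTK make entity extraction approachable.",
    },
]

with open("sample_feed.json", "w") as f:
    json.dump(posts, f)

Running the extractor against this file (assuming the script above is saved as extract_entities.py; the filename inside the archive may differ) prints each post's title underlined with dashes, followed by the title-cased entities and their frequencies:

    python extract_entities.py sample_feed.json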

File download

Resource details

[{"title":"( 134 个子文件 803KB ) 完整版基于Python的社交网站数据挖掘与数据分析 配套数据源和源代码.rar","children":[{"title":".gitignore <span style='color:#111;'> 33B </span>","children":null,"spread":false},{"title":"linkedin__analyze_companies.py <span style='color:#111;'> 786B </span>","children":null,"spread":false},{"title":"mailboxes__sort_by_value_in_another_db.py <span style='color:#111;'> 985B </span>","children":null,"spread":false},{"title":"the_tweet__avg_hashtags_per_tweet.py <span style='color:#111;'> 2.44KB </span>","children":null,"spread":false},{"title":"blogs_and_nlp__get_feed.py <span style='color:#111;'> 991B </span>","children":null,"spread":false},{"title":"......","children":null,"spread":false},{"title":"<span style='color:steelblue;'>文件过多,未全部展示</span>","children":null,"spread":false}],"spread":true}]

Comments

  • haibeihome :
    No review was left within the allotted time after download, so the system recorded a positive review by default.
    2021-08-15

Disclaimer

The resources on 【只为小站】 come from sharing by users and are provided for study and research only. Be sure to delete anything you download within 24 hours, and do not use it for any other purpose; you bear the consequences otherwise. Given the nature of the Internet, 【只为小站】 cannot substantively review the ownership, legality, compliance, authenticity, scientific validity, completeness, or effectiveness of works, information, or content transmitted by users. Whether or not the operators of 【只为小站】 have reviewed such material, users bear on their own any legal liability for infringement or ownership disputes that may arise, or have already arisen, from the works, information, or content they transmit.
None of the resources on this site represent the site's views or position; they are based on sharing by users. Under Article 22 of China's Regulations on the Protection of the Right of Communication through Information Networks, if a resource is infringing or otherwise problematic, please contact the site's support staff at zhiweidada#qq.com (replace the # with @); the site will offer its fullest support and cooperation and will respond and act promptly. For more on copyright and disclaimers, see the Copyright and Disclaimer notice.