Collecting 11K danmaku with Python and making a word cloud~
2022-03-31
颜狗一只
Preface
Hi everyone! 魔王 here.
Today I'll show you how to collect a TV drama's danmaku (bullet comments) and turn them into a word cloud. I hope you enjoy it and keep supporting me~
Environment
- python 3.8
- pycharm
Modules used
- requests >>> pip install requests
- pyecharts >>> pip install pyecharts
Collecting the video danmaku
import csv
import requests

headers = {
    'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/99.0.4844.74 Safari/537.36'
}
for page in range(15, 1500, 30):
    # The timestamp parameter pages through the danmaku stream in 30-second steps
    url = f'https://mfm.XXXX.com/danmu?otype=json&target_id=7712618480%26vid%3Dg00423lkmas&session_key=0%2C0%2C0&timestamp={page}&_=1647931110703'
    # Request the data
    response = requests.get(url=url, headers=headers)
    # Parse the data: the JSON string becomes a dictionary (a container)
    json_data = response.json()
    # Extract the fields we need
    for comment in json_data['comments']:
        commentid = comment['commentid']
        opername = comment['opername']
        content = comment['content']
        # Save the data
        with open('弹幕.csv', encoding='utf-8-sig', mode='a', newline='') as f:
            csv_writer = csv.writer(f)
            csv_writer.writerow([commentid, opername, content])
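Instead of hand-encoding the query string, you can let `urllib.parse.urlencode` build it. This is a minimal sketch: the parameter names follow the URL above, but the original nests `vid` inside the `target_id` value, while here the parameters are flattened for illustration.

```python
from urllib.parse import urlencode

def build_danmu_url(page):
    # Illustrative parameter set; urlencode handles percent-encoding
    # (e.g. the commas in session_key become %2C automatically).
    params = {
        'otype': 'json',
        'target_id': '7712618480',
        'vid': 'g00423lkmas',
        'session_key': '0,0,0',
        'timestamp': page,
    }
    return 'https://mfm.XXXX.com/danmu?' + urlencode(params)

print(build_danmu_url(15))
```

This keeps the paging loop readable: each iteration just passes a different `timestamp` value.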
Running the code yields 10,000+ danmaku records.
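To sanity-check how many rows actually landed in the file, you can count them with the csv module. A small self-contained sketch (it writes a tiny sample file first, standing in for the real 弹幕.csv):

```python
import csv

# Stand-in sample data; the real file is produced by the scraper above
sample = [['id1', 'user1', '前方高能'], ['id2', '', '哈哈哈哈']]
with open('弹幕.csv', 'w', encoding='utf-8-sig', newline='') as f:
    csv.writer(f).writerows(sample)

def count_rows(path):
    # Count data rows; the scraper writes no header row
    with open(path, encoding='utf-8-sig', newline='') as f:
        return sum(1 for _ in csv.reader(f))

print(count_rows('弹幕.csv'))  # → 2
```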
Word cloud visualization
Load the data
import pandas as pd

# The CSV was written without a header row, so supply the column names
data = pd.read_csv('弹幕.csv', names=['commentid', 'opername', 'content'])['content']
data
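Scraped danmaku often contain empty and duplicated entries, which skew the word counts. A small clean-up pass before counting words (a sketch on in-memory sample data; column name as saved above):

```python
import pandas as pd

# Sample standing in for the loaded 'content' column
df = pd.DataFrame({'content': ['哈哈哈', '哈哈哈', None, '前方高能']})

# Drop missing values, then exact-duplicate danmaku
cleaned = df['content'].dropna().drop_duplicates()
print(cleaned.tolist())  # → ['哈哈哈', '前方高能']
```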
Word cloud
import jieba  # assumed for Chinese word segmentation (pip install jieba)
from collections import Counter
from pyecharts import options as opts
from pyecharts.charts import WordCloud

# The post omits the word-frequency step; a common approach is to segment
# with jieba and keep the 100 most frequent multi-character words
words = jieba.lcut(''.join(data.astype(str)))
word, count = zip(*Counter(w for w in words if len(w) > 1).most_common(100))

a = [list(z) for z in zip(word, count)]
c = (
    WordCloud()
    .add('', a, word_size_range=[10, 50], shape='circle')
    .set_global_opts(title_opts=opts.TitleOpts(title="词云图"))
)
c.render_notebook()
Closing words
Well, that's the end of this article!
If you have suggestions or questions, leave a comment or message me! Let's keep at it together (ง •_•)ง
If you liked this, follow me, or give the article a like, bookmark, or comment!!!