
Crawling Douyu's 颜值 (Beauty) Channel

Guest · 2024-11-19



1. Packet capture analysis

This time we analyze traffic captured from the phone, using Charles as the capture tool.


I've heard that app testing and API testing both require a packet-capture tool (I haven't done either, so I can't say for sure).


There are plenty of online tutorials for configuring the capture tool, for example https://www.jianshu.com/p/5539599c7a25

Once configured, open the Douyu app on the phone and navigate to the 颜值 (beauty) channel.

Clear the contents of Charles.

Pull down to refresh the current page, then scroll through a few more pages.

Delete the entries in Charles that are unrelated to the douyu domain.

Roughly what remains is shown in the figure:

Inspect the entries one by one to find the response containing the streamer names:

https://apiv2.douyucdn.cn/gv2api/rkc/roomlist/2_201/0/20/ios?client_sys=ios

https://apiv2.douyucdn.cn/gv2api/rkc/roomlist/2_201/20/20/ios?client_sys=ios

https://apiv2.douyucdn.cn/gv2api/rkc/roomlist/2_201/40/20/ios?client_sys=ios

https://apiv2.douyucdn.cn/gv2api/rkc/roomlist/2_201/60/20/ios?client_sys=ios

The only difference between them is the number in /xx/20/ios?client_sys=ios, so it's reasonable to infer that this is a paging offset that advances by the page size of 20.
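Based on that inference, the request URL for any page can be generated by computing the offset from a 1-indexed page number. A minimal sketch (the `build_url` helper name is my own, not from the app):

```python
# Sketch: build the roomlist URL for a given 1-indexed page.
# The offset segment advances by the page size (20 rooms per request).
BASE = 'https://apiv2.douyucdn.cn/gv2api/rkc/roomlist/2_201/'
SUFFIX = '/20/ios?client_sys=ios'

def build_url(page, page_size=20):
    offset = (page - 1) * page_size
    return f'{BASE}{offset}{SUFFIX}'
```

Calling `build_url(1)` through `build_url(4)` reproduces the four captured URLs above.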

Several key fields can be found in the JSON:

Room ID: room_id

Room name: room_name

Streamer name: nickname

Streamer cover image: vertical_src

Streamer city: anchor_city

{ "roomRule": 0, "msg": "", "list": [{
    "room_id": 2450462,
    "room_name": "【第二萌】一日不见,如隔三秋",
    "nickname": "南京第二萌",
    "cate_id": 311,
    "room_src": "https://rpic.douyucdn.cn/live-cover/appCovers/2017/12/11/2450462_20171211203916_small.jpg",
    "is_vertical": 0,
    "vertical_src": "https://rpic.douyucdn.cn/live-cover/appCovers/2017/12/11/2450462_20171211203916_big.jpg",
    "online_num": 63, "hn": 35690, "show_status": 1,
    "bid_id": 0, "bidToken": "", "rpos": 0, "rankType": 0, "recomType": 0,
    "show_id": "81327515", "iho": 0, "guild_id": 0, "topid": 0, "chanid": 0,
    "jump_url": "", "client_sys": 1,
    "is_noble_rec": 0, "noble_rec_user_id": 0, "noble_rec_nickname": "",
    "anchor_city": "南京市",
    "rmf1": 0, "rmf2": 0, "rmf3": 0, "ofc": 0, "lhl": 0, "chgd": 0, "has_al": 1,
    "anchor_label": [{ "tag": "摸你奖杯", "id": 92681 }, { "tag": "大哥纹身", "id": 79659 }, { "tag": "大哥烫我", "id": 4912 }, { "tag": "王二怂", "id": 96525 }],
    "icon_url": "", "nly": 0 }
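The five fields above can be pulled out of the decoded response with plain dictionary access. A minimal sketch on an abridged record (the sample below is trimmed from the capture to just the fields of interest):

```python
import json

# Abridged sample of one entry from the captured roomlist response.
sample = json.loads('''{
    "room_id": 2450462,
    "room_name": "【第二萌】一日不见,如隔三秋",
    "nickname": "南京第二萌",
    "vertical_src": "https://rpic.douyucdn.cn/live-cover/appCovers/2017/12/11/2450462_20171211203916_big.jpg",
    "anchor_city": "南京市"
}''')

# Keep only the five fields the crawler cares about.
fields = ('room_id', 'room_name', 'nickname', 'vertical_src', 'anchor_city')
item = {k: sample[k] for k in fields}
```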

2. Writing the code

First, create two URL fragments to concatenate, with the page offset going in between:

self.url_1 = 'https://apiv2.douyucdn.cn/gv2api/rkc/roomlist/2_201/'
self.url_2 = '/20/ios?client_sys=ios'

Since the traffic came from the mobile client, to avoid any problems the request headers use a mobile User-Agent:

self.HEADERS={'User-Agent':'ios/3.700 (ios 11.2.6; ; iPhone X (A1865/A1902))'}

First, write a function that fetches the information identified in the analysis above.

import json is used to decode the JSON content.

The function accepts the page number, then:

1. Concatenates the URL

2. Decodes the JSON into a Python dictionary

3. Reads the needed values from the resulting dictionary

def get_message(self, page):
    page = (page - 1) * 20
    url = self.url_1 + str(page) + self.url_2
    res = requests.get(url=url, headers=self.HEADERS)
    message_json = json.loads(res.text)
    message_data = message_json['data']
    if not message_data:
        return
    message_lists = message_data['list']
    print('正在爬取第%s页' % int(page / 20 + 1))
    for message in message_lists:
        item = {}
        item['房间id'] = message['room_id']
        item['房间名字'] = message['room_name']
        item['主播名字'] = message['nickname']
        item['主播封面'] = message['vertical_src']
        item['主播城市'] = message['anchor_city']
        self.item_lists.append(item)
        self.download_pic(item)

Pass the extracted values into the image-download function:

1. Request the image URL and take the binary body via .content

2. To keep the images from being scattered, create an /img directory to store them

3. Open the file in 'wb' mode and write the binary data

def download_pic(self, item):
    content = requests.get(url=item['主播封面'], headers=self.HEADERS).content
    File_Path = os.getcwd() + '/img'
    if not os.path.exists(File_Path):
        os.makedirs(File_Path)
    with open('img/房间ID:%s---来自%s的%s.jpg' % (item['房间id'], item['主播城市'], item['主播名字']), 'wb') as f:
        f.write(content)

As usual, use a process pool to speed up the downloads:

if __name__ == '__main__':
    pool = Pool()
    message = DouYuYanZhi()
    for i in range(1, 20):
        pool.apply_async(message.get_message, args=(i,))
    pool.close()
    pool.join()

The full code:

#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Author: zhongxin
from multiprocessing import Pool
import requests
import json
import os


class DouYuYanZhi():
    def __init__(self):
        self.item_lists = []
        self.url_1 = 'https://apiv2.douyucdn.cn/gv2api/rkc/roomlist/2_201/'
        self.url_2 = '/20/ios?client_sys=ios'
        self.HEADERS = {'User-Agent': 'ios/3.700 (ios 11.2.6; ; iPhone X (A1865/A1902))'}

    def get_message(self, page):
        page = (page - 1) * 20
        url = self.url_1 + str(page) + self.url_2
        res = requests.get(url=url, headers=self.HEADERS)
        message_json = json.loads(res.text)
        message_data = message_json['data']
        if not message_data:
            return
        message_lists = message_data['list']
        print('正在爬取第%s页' % int(page / 20 + 1))
        for message in message_lists:
            item = {}
            item['房间id'] = message['room_id']
            item['房间名字'] = message['room_name']
            item['主播名字'] = message['nickname']
            item['主播封面'] = message['vertical_src']
            item['主播城市'] = message['anchor_city']
            self.item_lists.append(item)
            self.download_pic(item)

    def download_pic(self, item):
        content = requests.get(url=item['主播封面'], headers=self.HEADERS).content
        File_Path = os.getcwd() + '/img'
        if not os.path.exists(File_Path):
            os.makedirs(File_Path)
        with open('img/房间ID:%s---来自%s的%s.jpg' % (item['房间id'], item['主播城市'], item['主播名字']), 'wb') as f:
            f.write(content)


if __name__ == '__main__':
    pool = Pool()
    message = DouYuYanZhi()
    for i in range(1, 20):
        pool.apply_async(message.get_message, args=(i,))
    pool.close()
    pool.join()

Run result:
