
Requests is an HTTP client library for Python.

Requests supports HTTP keep-alive and connection pooling, cookie-based session persistence, file uploads, automatic decoding of response content, internationalized URLs, and automatic encoding of POST data.

It is a high-level wrapper over Python's built-in modules that makes issuing network requests from Python genuinely pleasant: with Requests you can accomplish almost anything a browser can, with very little code. Modern, international, friendly.

Requests implements persistent connections (keep-alive) automatically: requests made through the same Session reuse the underlying TCP connection.
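For example, a minimal sketch of connection reuse through a Session, using httpbin.org (a request echo service, also used in the examples below) as a stand-in endpoint:

    import requests

    # requests.get() alone sets up a new connection per call; a Session
    # keeps the underlying TCP connection open and reuses it across calls.
    session = requests.Session()
    for _ in range(3):
        response = session.get("http://httpbin.org/get")
        print(response.status_code)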

Source code: https://github.com/kennethreitz/requests
Chinese documentation: http://docs.python-requests.org/zh_CN/latest/index.html

Contents
I. Requests Basics
II. Sending Requests and Receiving Responses (Basic GET Request)
III. Sending Requests and Receiving Responses (Basic POST Request)
IV. Response Attributes
V. Proxies
VI. Cookies and Sessions
VII. Examples

I. Requests Basics

1. Installing the Requests library

    pip install requests

2. Using the Requests library

    import requests

II. Sending Requests and Receiving Responses (Basic GET Request)

    response = requests.get(url)

1. Passing query parameters (params)

  • Parameters embedded directly in the URL:

    response = requests.get("http://httpbin.org/get?name=zhangsan&age=22")
    print(response.text)


  • Parameters passed through the params argument of get:

    data = {
        "name": "zhangsan",
        "age": 30
    }
    response = requests.get("http://httpbin.org/get", params=data)
    print(response.text)

2. Sending custom request headers (the headers parameter)

    headers = {
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.100 Safari/537.36"
    }
    response = requests.get("http://httpbin.org/get", headers=headers)
    print(response.text)

III. Sending Requests and Receiving Responses (Basic POST Request)

    response = requests.post(url, data=data, headers=headers)
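For instance, a self-contained sketch of a POST against httpbin.org, which echoes the submitted form fields back in its JSON response:

    import requests

    data = {"name": "zhangsan", "age": 30}
    headers = {
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"
    }
    response = requests.post("http://httpbin.org/post", data=data, headers=headers)
    # httpbin echoes the submitted form fields back under the "form" key
    print(response.json()["form"])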

IV. Response Attributes

Attribute               Description
response.text           the response body as a str (text decoded to Unicode)
response.content        the response body as raw bytes
response.status_code    the HTTP status code of the response
response.headers        the response headers
response.request        the request that produced this response
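A quick sketch that exercises each of these attributes against httpbin.org:

    import requests

    response = requests.get("http://httpbin.org/get")
    print(response.status_code)              # e.g. 200
    print(response.headers["Content-Type"])  # e.g. application/json
    print(type(response.text))               # <class 'str'>
    print(type(response.content))            # <class 'bytes'>
    print(response.request.method, response.request.url)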

V. Proxies

    proxies = {
        # map each URL scheme to the proxy that should handle it
        # (sample addresses; substitute working proxies of your own)
        "http": "https://175.44.148.176:9000",
        "https": "https://183.129.207.86:14002"
    }
    response = requests.get("https://www.baidu.com/", proxies=proxies)

VI. Cookies and Sessions

  • Benefit of using cookies and sessions: many sites only serve the relevant data after you log in (or otherwise obtain some permission).
  • Drawback: a set of cookies or a session is usually tied to one user account; requesting too quickly or too often makes it easy for the server to identify you as a crawler, which can get the account penalized.

1. When you do not actually need cookies, avoid using them.
2. To fetch pages that sit behind a login, you must send requests carrying cookies; in that case, protect the account by keeping the collection rate as low as possible, as in the sketch below.
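A minimal sketch of such throttling, assuming a session that has already logged in and a hypothetical list of page URLs:

    import time
    import requests

    session = requests.Session()
    # ...log in first, e.g. session.post(login_url, data=login_data)...
    page_urls = ["http://example.com/page1", "http://example.com/page2"]  # hypothetical
    for url in page_urls:
        response = session.get(url)
        time.sleep(2)  # pause between requests to keep the collection rate low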

1. Cookies

(1) Reading cookie information

    response.cookies
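response.cookies is a RequestsCookieJar. As a small sketch, the dict_from_cookiejar helper that ships with requests turns it into a plain dict:

    import requests

    response = requests.get("http://www.baidu.com/")
    print(response.cookies)  # a RequestsCookieJar
    # convert the jar into an ordinary {name: value} dict
    cookie_dict = requests.utils.dict_from_cookiejar(response.cookies)
    print(cookie_dict)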

2. Sessions

(1) Constructing a session object

    session = requests.session()

Example:

    import requests

    def login_renren():
        login_url = 'http://www.renren.com/SysHome.do'
        headers = {
            "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.100 Safari/537.36"
        }
        session = requests.session()
        login_data = {
            "email": "your-account",      # account name goes here
            "password": "your-password"   # password goes here
        }
        # the session keeps the login cookies and sends them with every later request
        response = session.post(login_url, data=login_data, headers=headers)
        response = session.get("http://www.renren.com/971909762/newsfeed/photo")
        print(response.text)

    login_renren()

VII. Examples


Example 1: Crawling Baidu Tieba pages (GET request)

    import os
    import sys
    import requests

    class BaiduTieBa:
        def __init__(self, name, pn):
            self.name = name
            # the page offset is appended per page below, so leave the pn= value off here
            self.url = "http://tieba.baidu.com/f?kw={}&ie=utf-8&pn=".format(name)
            self.headers = {
                # "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.100 Safari/537.36"
                # use an old browser's User-Agent (no js support), so the server returns simpler pages
                "User-Agent": "Mozilla/4.0 (compatible; MSIE 5.01; Windows NT 5.0)"
            }
            # each result page holds 50 posts, so page i starts at offset i * 50
            self.url_list = [self.url + str(i * 50) for i in range(pn)]
            os.makedirs("./pages", exist_ok=True)
            print(self.url_list)

        def get_data(self, url):
            """Fetch one page and return its body as bytes."""
            response = requests.get(url, headers=self.headers)
            return response.content

        def save_data(self, data, num):
            """Save one page as ./pages/<name>_<num>.html."""
            file_name = "./pages/" + self.name + "_" + str(num) + ".html"
            with open(file_name, "wb") as f:
                f.write(data)

        def run(self):
            for num, url in enumerate(self.url_list):
                data = self.get_data(url)
                self.save_data(data, num)

    if __name__ == "__main__":
        name = sys.argv[1]
        pn = int(sys.argv[2])
        baidu = BaiduTieBa(name, pn)
        baidu.run()

Example 2: iCIBA (金山词霸) translation (POST request)

    import requests
    import sys
    import json

    class JinshanCiBa:
        def __init__(self, words):
            self.url = "http://fy.iciba.com/ajax.php?a=fy"
            self.headers = {
                "User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36 SE 2.X MetaSr 1.0",
                # mark the request as AJAX, as the site's own front end does
                "X-Requested-With": "XMLHttpRequest"
            }
            self.post_data = {
                "f": "auto",   # source language: auto-detect
                "t": "auto",   # target language: auto-detect
                "w": words     # the text to translate
            }

        def get_data(self):
            """POST the form data and return the raw response text."""
            response = requests.post(self.url, data=self.post_data, headers=self.headers)
            return response.text

        def show_translation(self):
            """Parse the JSON response and print the translation."""
            response = self.get_data()
            # json.loads no longer accepts an encoding argument in Python 3
            json_data = json.loads(response)
            if json_data['status'] == 0:
                translation = json_data['content']['word_mean']
            elif json_data['status'] == 1:
                translation = json_data['content']['out']
            else:
                translation = None
            print(translation)

        def run(self):
            self.show_translation()

    if __name__ == "__main__":
        words = sys.argv[1]
        ciba = JinshanCiBa(words)
        ciba.run()


Example 3: Crawling Baidu Tieba images

(1) Basic version

Extract image URLs from pages that have already been downloaded and fetch the images (see Example 1 for downloading the pages).

    import os
    import requests
    from lxml import etree

    class DownloadPhoto:
        def __init__(self):
            os.makedirs("./photo", exist_ok=True)

        def download_img(self, url):
            response = requests.get(url)
            # use everything after the last '/' as the file name
            index = url.rfind('/')
            file_name = url[index + 1:]
            print("Downloading image: " + file_name)
            save_name = "./photo/" + file_name
            with open(save_name, "wb") as f:
                f.write(response.content)

        def parse_photo_url(self, page):
            # parse a previously saved HTML page and pull out the image URLs
            html = etree.parse(page, etree.HTMLParser())
            nodes = html.xpath("//a[contains(@class, 'thumbnail')]/img/@bpic")
            print(nodes)
            print(len(nodes))
            for node in nodes:
                self.download_img(node)

    if __name__ == "__main__":
        download = DownloadPhoto()
        for i in range(6000):
            download.parse_photo_url("./pages/校花_{}.html".format(i))

(2) Multithreaded version

main.py

    import requests
    from lxml import etree
    from file_download import DownLoadExecutioner, file_download

    class XiaoHua:
        def __init__(self, init_url):
            self.init_url = init_url
            self.download_executioner = DownLoadExecutioner()

        def start(self):
            # start the consumer thread, then begin feeding it image URLs
            self.download_executioner.start()
            self.download_img(self.init_url)

        def download_img(self, url):
            html_text = file_download(url, type='text')
            html = etree.HTML(html_text)
            img_urls = html.xpath("//a[contains(@class,'thumbnail')]/img/@bpic")
            self.download_executioner.put_task(img_urls)
            # follow the link to the next page, if there is one
            next_page = html.xpath("//div[@id='frs_list_pager']/a[contains(@class,'next')]/@href")
            if next_page:
                self.download_img("http:" + next_page[0])

    if __name__ == '__main__':
        x = XiaoHua("http://tieba.baidu.com/f?kw=校花&ie=utf-8")
        x.start()

file_download.py

    import os
    import requests
    import threading
    from queue import Queue

    def file_download(url, type='content'):
        headers = {
            'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; rv:11.0) like Gecko'
        }
        r = requests.get(url, headers=headers)
        if type == 'text':
            return r.text
        return r.content

    class DownLoadExecutioner(threading.Thread):
        def __init__(self):
            super().__init__()
            self.q = Queue(maxsize=50)
            # directory where images are saved
            self.save_dir = './img/'
            os.makedirs(self.save_dir, exist_ok=True)
            # count of images downloaded so far
            self.index = 0

        def put_task(self, urls):
            if isinstance(urls, list):
                for url in urls:
                    self.q.put(url)
            else:
                self.q.put(urls)

        def run(self):
            while True:
                url = self.q.get()
                content = file_download(url)
                # use everything after the last '/' as the file name
                index = url.rfind('/')
                file_name = url[index + 1:]
                save_name = self.save_dir + file_name
                with open(save_name, 'wb+') as f:
                    f.write(content)
                self.index += 1
                print(save_name + " saved; images downloaded so far: " + str(self.index))

(3) Thread-pool version

main.py

    import requests
    from lxml import etree
    from file_download_pool import DownLoadExecutionerPool, file_download

    class XiaoHua:
        def __init__(self, init_url):
            self.init_url = init_url
            self.download_executioner = DownLoadExecutionerPool()

        def start(self):
            self.download_img(self.init_url)

        def download_img(self, url):
            html_text = file_download(url, type='text')
            html = etree.HTML(html_text)
            img_urls = html.xpath("//a[contains(@class,'thumbnail')]/img/@bpic")
            self.download_executioner.put_task(img_urls)
            # follow the link to the next page, if there is one
            next_page = html.xpath("//div[@id='frs_list_pager']/a[contains(@class,'next')]/@href")
            if next_page:
                self.download_img("http:" + next_page[0])

    if __name__ == '__main__':
        x = XiaoHua("http://tieba.baidu.com/f?kw=校花&ie=utf-8")
        x.start()

file_download_pool.py

    import os
    import requests
    import concurrent.futures as futures

    def file_download(url, type='content'):
        headers = {
            'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; rv:11.0) like Gecko'
        }
        r = requests.get(url, headers=headers)
        if type == 'text':
            return r.text
        return r.content

    class DownLoadExecutionerPool:
        def __init__(self):
            # directory where images are saved
            self.save_dir = './img_pool/'
            os.makedirs(self.save_dir, exist_ok=True)
            # count of images downloaded so far
            self.index = 0
            # thread pool that runs save_img for each submitted URL
            self.ex = futures.ThreadPoolExecutor(max_workers=30)

        def put_task(self, urls):
            if isinstance(urls, list):
                for url in urls:
                    self.ex.submit(self.save_img, url)
            else:
                self.ex.submit(self.save_img, urls)

        def save_img(self, url):
            content = file_download(url)
            # use everything after the last '/' as the file name
            index = url.rfind('/')
            file_name = url[index + 1:]
            save_name = self.save_dir + file_name
            with open(save_name, 'wb+') as f:
                f.write(content)
            self.index += 1
            print(save_name + " saved; images downloaded so far: " + str(self.index))

Author: Recalcitrant
Link: https://www.jianshu.com/p/140012f88f8e
