Still haven't learned the Scrapy crawler framework? A simple example of collecting website data with a crawler framework

Published 2021-01-20 · 376 views


Preface

The text and images in this article are from the internet and are for learning and exchange only; they are not for any commercial use. If there is any problem, please contact us promptly and we will deal with it.

This article uses the Python crawler framework Scrapy to collect some data from a website.


Basic development environment
Python 3.6
PyCharm

How to install Scrapy

In the cmd command line, pip install scrapy is enough to install it, but in most cases you will run into a network timeout.

It is recommended to install from a domestic (China) mirror instead: pip install -i <mirror address> <package name>

For example:

pip install -i https://mirrors.aliyun.com/pypi/simple/ scrapy

Commonly used domestic mirror addresses:

Tsinghua: https://pypi.tuna.tsinghua.edu.cn/simple
Aliyun: http://mirrors.aliyun.com/pypi/simple/
USTC (University of Science and Technology of China): https://pypi.mirrors.ustc.edu.cn/simple/
HUST (Huazhong University of Science and Technology): http://pypi.hustunique.com/
Shandong University of Technology: http://pypi.sdutlinux.org/
Douban: http://pypi.douban.com/simple/
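If you do not want to pass -i every time, pip can also be pointed at a mirror permanently, for example using the Aliyun address above:

    pip config set global.index-url https://mirrors.aliyun.com/pypi/simple/
    pip install scrapy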

Errors you may encounter:

During the installation of Scrapy you may run into VC++ build-tool errors; in that case you can install an offline package (a .whl file) of the module that fails to build.
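On Windows the module that typically triggers the VC++ error is Twisted. A common workaround is to download a pre-built .whl that matches your Python version and install it locally before Scrapy; the filename below is only an example for Python 3.6 on 64-bit Windows:

    pip install Twisted-20.3.0-cp36-cp36m-win_amd64.whl
    pip install scrapy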


How Scrapy crawls website data

This article takes the Douban Movie Top 250 data as an example to walk through the basic workflow of crawling data with the Scrapy framework.


The Douban Top 250 page itself does not need much analysis: it is a static site whose page structure is very easy to scrape, which is why so many beginner crawler tutorials use Douban movie data or Maoyan movie data as examples.

How to create a Scrapy crawler project
1. Create a crawler project

In PyCharm, open the Terminal (Local tab) and enter:

scrapy startproject <project name (must be unique)>


2. cd into the crawler project directory


3. Create the spider file

scrapy genspider <spider name (must be unique)> <allowed domain>


With that, the Scrapy project and the spider file have been created.
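For the project used in this article (the names match the code shown below), the full sequence of commands is:

    scrapy startproject douban
    cd douban
    scrapy genspider douban_info douban.com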

Writing the Scrapy crawler code
1. In settings.py, turn off obeying the robots protocol (the default is True)

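The corresponding line in settings.py (the complete file is shown later in this article) is:

    ROBOTSTXT_OBEY = False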

2. In the spider file, change the start URLs

start_urls = ['https://movie.douban.com/top250?filter=']

Set start_urls to the Douban list page, i.e. the URL of the first page of the data you want to crawl.

3. Write the parsing logic

The fields to crawl are: title, info (director, cast, year and so on), score, number of ratings, and summary.


    douban_info.py

    import scrapy

    from ..items import DoubanItem


    class DoubanInfoSpider(scrapy.Spider):
        name = 'douban_info'
        allowed_domains = ['douban.com']
        start_urls = ['https://movie.douban.com/top250?start=0&filter=']

        def parse(self, response):
            # Each movie on the list page is an <li> inside the .grid_view list
            lis = response.css('.grid_view li')
            print(lis)
            for li in lis:
                title = li.css('.hd span:nth-child(1)::text').get()
                movie_info = li.css('.bd p::text').getall()
                info = ''.join(movie_info).strip()
                score = li.css('.rating_num::text').get()
                number = li.css('.star span:nth-child(4)::text').get()
                summary = li.css('.inq::text').get()
                print(title)
                # Hand the scraped fields to the item pipeline
                yield DoubanItem(title=title, info=info, score=score, number=number, summary=summary)

            # Follow the "next page" link until there is none (last page)
            href = response.css('#content .next a::attr(href)').get()
            if href:
                next_url = 'https://movie.douban.com/top250' + href
                yield scrapy.Request(url=next_url, callback=self.parse)
    
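If you want to verify the CSS selectors before wiring them into parse(), Scrapy ships an interactive shell. A quick sketch (note that Douban may reject Scrapy's default User-Agent, in which case you would set USER_AGENT in settings.py first):

    scrapy shell "https://movie.douban.com/top250"
    >>> response.css('.grid_view li .hd span:nth-child(1)::text').getall()
    >>> response.css('.rating_num::text').getall()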

    items.py

    import scrapy


    class DoubanItem(scrapy.Item):
        # define the fields for your item here like:
        title = scrapy.Field()
        info = scrapy.Field()
        score = scrapy.Field()
        number = scrapy.Field()
        summary = scrapy.Field()
    

    middlewares.py

    import faker
    import requests


    def get_cookies():
        """Request the list page once and return its cookies as a dict."""
        headers = {
            'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.111 Safari/537.36'}
        response = requests.get(url='https://movie.douban.com/top250?start=0&filter=',
                                headers=headers)
        return response.cookies.get_dict()


    def get_proxies():
        """Fetch a proxy from a locally running proxy-pool service."""
        proxy_data = requests.get(url='http://127.0.0.1:5000/get/').json()
        return proxy_data['proxy']


    class HeadersDownloaderMiddleware:
        """Headers middleware: give every request a random User-Agent."""

        def process_request(self, request, spider):
            fake = faker.Faker()
            # request.headers is the request's header mapping; replace the user-agent
            request.headers.update(
                {
                    'user-agent': fake.user_agent(),
                }
            )
            return None


    class CookieDownloaderMiddleware:
        """Cookie middleware: attach cookies to every request."""

        def process_request(self, request, spider):
            # request.cookies is a dict of cookies sent with the request;
            # get_cookies() fetches them from the site first
            request.cookies.update(get_cookies())
            return None


    class ProxyDownloaderMiddleware:
        """Proxy middleware: route every request through a proxy."""

        def process_request(self, request, spider):
            # request.meta is a dict; setting 'proxy' makes Scrapy use that proxy
            request.meta['proxy'] = get_proxies()
            return None
    
    
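One thing to note: the settings.py shown below only registers HeadersDownloaderMiddleware, so the cookie and proxy middlewares above will not actually run. If you want them as well, they also have to be added to DOWNLOADER_MIDDLEWARES; a sketch (the priorities 544 and 545 are arbitrary example values, and the proxy middleware assumes a proxy-pool service is already running at http://127.0.0.1:5000/get/):

    DOWNLOADER_MIDDLEWARES = {
        'douban.middlewares.HeadersDownloaderMiddleware': 543,
        'douban.middlewares.CookieDownloaderMiddleware': 544,
        'douban.middlewares.ProxyDownloaderMiddleware': 545,
    }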

    pipelines.py

    import csv


    class DoubanPipeline:
        def __init__(self):
            # Open the CSV file in append mode and write the header row once
            self.file = open('douban.csv', mode='a', encoding='utf-8', newline='')
            self.csv_file = csv.DictWriter(self.file, fieldnames=['title', 'info', 'score', 'number', 'summary'])
            self.csv_file.writeheader()

        def process_item(self, item, spider):
            dit = dict(item)
            # Collapse the multi-line info text into a single line
            dit['info'] = dit['info'].replace('\n', "").strip()
            self.csv_file.writerow(dit)
            return item

        def close_spider(self, spider):
            # Scrapy calls close_spider automatically when the spider finishes
            self.file.close()
    
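As a side note, for a flat CSV like this you could also skip the custom pipeline and let Scrapy's built-in feed export write the file when the spider runs, for example:

    scrapy crawl douban_info -o douban.csv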

    settings.py

    # Scrapy settings for douban project
    #
    # For simplicity, this file contains only settings considered important or
    # commonly used. You can find more settings consulting the documentation:
    #
    #     https://docs.scrapy.org/en/latest/topics/settings.html
    #     https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
    #     https://docs.scrapy.org/en/latest/topics/spider-middleware.html
    
    
    BOT_NAME = 'douban'
    
    
    SPIDER_MODULES = ['douban.spiders']
    NEWSPIDER_MODULE = 'douban.spiders'
    
    
    
    
    # Crawl responsibly by identifying yourself (and your website) on the user-agent
    #USER_AGENT = 'douban (+http://www.yourdomain.com)'
    
    
    # Obey robots.txt rules
    ROBOTSTXT_OBEY = False
    
    
    # Configure maximum concurrent requests performed by Scrapy (default: 16)
    #CONCURRENT_REQUESTS = 32
    
    
    # Configure a delay for requests for the same website (default: 0)
    # See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
    # See also autothrottle settings and docs
    #DOWNLOAD_DELAY = 3
    # The download delay setting will honor only one of:
    #CONCURRENT_REQUESTS_PER_DOMAIN = 16
    #CONCURRENT_REQUESTS_PER_IP = 16
    
    
    # Disable cookies (enabled by default)
    #COOKIES_ENABLED = False
    
    
    # Disable Telnet Console (enabled by default)
    #TELNETCONSOLE_ENABLED = False
    
    
    # Override the default request headers:
    #DEFAULT_REQUEST_HEADERS = {
    #   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    #   'Accept-Language': 'en',
    #}
    
    
    # Enable or disable spider middlewares
    # See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
    # SPIDER_MIDDLEWARES = {
    #    'douban.middlewares.DoubanSpiderMiddleware': 543,
    # }
    
    
    # Enable or disable downloader middlewares
    # See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
    DOWNLOADER_MIDDLEWARES = {
       'douban.middlewares.HeadersDownloaderMiddleware': 543,
    }
    
    
    # Enable or disable extensions
    # See https://docs.scrapy.org/en/latest/topics/extensions.html
    #EXTENSIONS = {
    #    'scrapy.extensions.telnet.TelnetConsole': None,
    #}
    
    
    # Configure item pipelines
    # See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
    ITEM_PIPELINES = {
       'douban.pipelines.DoubanPipeline': 300,
    }
    
    
    # Enable and configure the AutoThrottle extension (disabled by default)
    # See https://docs.scrapy.org/en/latest/topics/autothrottle.html
    #AUTOTHROTTLE_ENABLED = True
    # The initial download delay
    #AUTOTHROTTLE_START_DELAY = 5
    # The maximum download delay to be set in case of high latencies
    #AUTOTHROTTLE_MAX_DELAY = 60
    # The average number of requests Scrapy should be sending in parallel to
    # each remote server
    #AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
    # Enable showing throttling stats for every response received:
    #AUTOTHROTTLE_DEBUG = False
    
    
    # Enable and configure HTTP caching (disabled by default)
    # See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
    #HTTPCACHE_ENABLED = True
    #HTTPCACHE_EXPIRATION_SECS = 0
    #HTTPCACHE_DIR = 'httpcache'
    #HTTPCACHE_IGNORE_HTTP_CODES = []
    #HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
    

4. Run the crawler


Enter the command: scrapy crawl <spider file name>
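For this project the spider is named douban_info, so the command is:

    scrapy crawl douban_info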
