What Are the Key Points of Python Crawler Performance Optimization?



Here we will work through webpage-request examples to understand crawler performance step by step.

When we have a list of URLs whose data we need to fetch, the first thing that comes to mind is a loop.

Simple serial loop

This approach is the slowest: the requests run one after another, so the total elapsed time is the sum of every individual request's time.
The code is as follows:

import requests

url_list = [
    'http://www.baidu.com',
    'http://www.pythonsite.com',
    'http://www.cnblogs.com/',
]

# Fetch each URL one at a time; each request blocks until it finishes
for url in url_list:
    result = requests.get(url)
    print(result.text)

Using a thread pool

With a thread pool, the requests run concurrently, so the total elapsed time is roughly that of the single slowest request, which is much faster than the serial loop.

import requests
from concurrent.futures import ThreadPoolExecutor

def fetch_request(url):
    result = requests.get(url)
    print(result.text)

url_list = [
    'http://www.baidu.com',
    'http://www.bing.com',
    'http://www.cnblogs.com/',
]

pool = ThreadPoolExecutor(10)
for url in url_list:
    # Grab a thread from the pool and have it run fetch_request
    pool.submit(fetch_request, url)
pool.shutdown(True)
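To make the claim about elapsed time concrete, here is a minimal timing sketch (not from the original article; the URLs and timeout are illustrative) comparing the serial loop with the thread pool:

import time
import requests
from concurrent.futures import ThreadPoolExecutor

url_list = [
    'http://www.baidu.com',
    'http://www.bing.com',
    'http://www.cnblogs.com/',
]

def fetch(url):
    requests.get(url, timeout=10)

# Serial: elapsed time is roughly the sum of all the requests
start = time.perf_counter()
for url in url_list:
    fetch(url)
print(f'serial:   {time.perf_counter() - start:.2f}s')

# Pooled: elapsed time is roughly that of the single slowest request
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=10) as pool:
    list(pool.map(fetch, url_list))
print(f'threaded: {time.perf_counter() - start:.2f}s')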

Thread pool + callback function

Here we define a callback function named callback and attach it to each future; it runs as soon as the corresponding request finishes.

from concurrent.futures import ThreadPoolExecutor
import requests

def fetch_async(url):
    response = requests.get(url)
    return response

def callback(future):
    print(future.result().text)

url_list = [
    'http://www.baidu.com',
    'http://www.bing.com',
    'http://www.cnblogs.com/',
]

pool = ThreadPoolExecutor(5)
for url in url_list:
    v = pool.submit(fetch_async, url)
    # Register the callback; it fires when the future completes
    v.add_done_callback(callback)
pool.shutdown()
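A small caveat worth knowing: add_done_callback does not spawn a new thread. The callback runs in the worker thread that completed the future (or immediately in the submitting thread if the future has already finished), so heavy post-processing inside the callback still occupies pool workers.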

Using a process pool

A process pool behaves the same way: the total elapsed time is again determined by the slowest request. However, processes consume more resources than threads, and since fetching URLs is I/O-bound work, a thread pool is the better choice here.

import requests
from concurrent.futures import ProcessPoolExecutor

def fetch_request(url):
    result = requests.get(url)
    print(result.text)

url_list = [
    'http://www.baidu.com',
    'http://www.bing.com',
    'http://www.cnblogs.com/',
]

# The guard is required where child processes are spawned (see note below)
if __name__ == '__main__':
    pool = ProcessPoolExecutor(10)
    for url in url_list:
        # Grab a child process from the pool and have it run fetch_request
        pool.submit(fetch_request, url)
    pool.shutdown(True)

Process pool + callback function

This works exactly like the thread pool + callback approach, but spawning processes wastes more resources than spawning threads.

from concurrent.futures import ProcessPoolExecutor
import requests

def fetch_async(url):
    response = requests.get(url)
    return response

def callback(future):
    print(future.result().text)

url_list = [
    'http://www.baidu.com',
    'http://www.bing.com',
    'http://www.cnblogs.com/',
]

if __name__ == '__main__':
    pool = ProcessPoolExecutor(5)
    for url in url_list:
        v = pool.submit(fetch_async, url)
        # Register the callback; it runs back in the parent process
        v.add_done_callback(callback)
    pool.shutdown()
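A note on the if __name__ == '__main__': guard added to both process pool sketches above: on platforms that start workers by spawning a fresh interpreter (Windows, and macOS since Python 3.8), each child re-imports the main module, and unguarded pool creation would recurse.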

Mainstream ways to achieve concurrency in a single thread

  1. asyncio
  2. gevent
  3. Twisted
  4. Tornado

Below are example implementations for each of these four:

asyncio example 1:

import asyncio

# async def / await replaces the legacy @asyncio.coroutine / yield from
# style shown in the original, which was removed in Python 3.11
async def func1():
    print('before...func1......')
    # Must await asyncio.sleep here, not call time.sleep,
    # otherwise the event loop would be blocked
    await asyncio.sleep(2)
    print('end...func1......')

async def main():
    await asyncio.gather(func1(), func1())

asyncio.run(main())

The effect: both 'before' lines are printed at the same time, then after the 2-second wait both 'end' lines are printed.
asyncio itself does not provide a way to send HTTP requests, but we can build one on top of it; a reconstruction of the example that was garbled here follows below, and the next section shows the asyncio + requests variant.
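Only two hostnames survive from the garbled original, so the following is a minimal sketch of the likely lost example, assuming the classic pattern of speaking HTTP/1.0 directly over asyncio.open_connection (everything besides the hostnames is reconstruction):

import asyncio

async def fetch_async(host, url='/'):
    print('start', host, url)
    # Open a raw TCP connection and write an HTTP/1.0 request by hand
    reader, writer = await asyncio.open_connection(host, 80)
    request = f'GET {url} HTTP/1.0\r\nHost: {host}\r\n\r\n'
    writer.write(request.encode())
    await writer.drain()
    # HTTP/1.0 servers close the connection, so read until EOF
    body = await reader.read()
    print(host, url, body[:150])
    writer.close()

async def main():
    await asyncio.gather(
        fetch_async('www.baidu.com'),
        fetch_async('www.chouti.com'),
    )

asyncio.run(main())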

asyncio + requests code example


import asyncio
import requests

async def fetch_async(func, *args):
    loop = asyncio.get_running_loop()
    # Hand the blocking requests call to the loop's default thread pool
    future = loop.run_in_executor(None, func, *args)
    response = await future
    print(response.url, response.content)

async def main():
    await asyncio.gather(
        fetch_async(requests.get, 'http://www.cnblogs.com/wupeiqi/'),
        fetch_async(requests.get, 'http://dig.chouti.com/pic/show?nid=4073644713430508&lid=10273091'),
    )

asyncio.run(main())
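Keep in mind that run_in_executor simply hands the blocking requests.get call to a thread pool behind the scenes, so this pattern is asyncio-flavored threading rather than true single-threaded non-blocking I/O; a natively asynchronous HTTP client such as aiohttp avoids the extra threads entirely.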

gevent + requests code example

from gevent import monkey
# Patch the standard library before requests is imported,
# so its sockets become cooperative
monkey.patch_all()

import gevent
import requests

def fetch_async(method, url, req_kwargs):
    print(method, url, req_kwargs)
    response = requests.request(method=method, url=url, **req_kwargs)
    print(response.url, response.content)

# ##### Send the requests #####
gevent.joinall([
    gevent.spawn(fetch_async, method='get', url='http://www.python.org/', req_kwargs={}),
    gevent.spawn(fetch_async, method='get', url='http://www.yahoo.com/', req_kwargs={}),
    gevent.spawn(fetch_async, method='get', url='http://github.com/', req_kwargs={}),
])

# ##### Send the requests (a Pool caps the number of concurrent greenlets) #####
# from gevent.pool import Pool
# pool = Pool(None)
# gevent.joinall([
#     pool.spawn(fetch_async, method='get', url='http://www.python.org/', req_kwargs={}),
#     pool.spawn(fetch_async, method='get', url='http://www.yahoo.com/', req_kwargs={}),
#     pool.spawn(fetch_async, method='get', url='http://www.github.com/', req_kwargs={}),
# ])

grequests code example
This library wraps requests together with gevent into a single package.

import grequests

request_list = [
    grequests.get('http://fakedomain/'),
    grequests.get('http://www.bing.com'),
    grequests.get('http://www.baidu.com'),
]
# Send all requests concurrently and collect the responses
response_list = grequests.map(request_list)
print(response_list)

Twisted code example

from twisted.internet import defer, reactor
from twisted.web.client import getPage  # deprecated in recent Twisted releases

def all_done(arg):
    reactor.stop()

def callback(contents):
    print(contents)

deferred_list = []
url_list = ['http://www.bing.com', 'http://www.baidu.com']
for url in url_list:
    deferred = getPage(bytes(url, encoding='utf8'))
    deferred.addCallback(callback)
    deferred_list.append(deferred)

# DeferredList acts as a check that tells us when every request has finished
dlist = defer.DeferredList(deferred_list)
dlist.addBoth(all_done)
reactor.run()

Tornado code example

# Note: the callback argument to fetch() was removed in Tornado 6,
# so this callback style requires Tornado < 6
from tornado.httpclient import AsyncHTTPClient, HTTPRequest
from tornado import ioloop

# Count outstanding requests so we know when to stop the IOLoop
pending = 0

def handle_response(response):
    global pending
    if response.error:
        print('Error:', response.error)
    else:
        print(response.body)
    pending -= 1
    if pending == 0:
        ioloop.IOLoop.current().stop()

def func():
    global pending
    url_list = [
        'http://www.baidu.com',
        'http://www.bing.com',
    ]
    for url in url_list:
        print(url)
        pending += 1
        http_client = AsyncHTTPClient()
        http_client.fetch(HTTPRequest(url), handle_response)

ioloop.IOLoop.current().add_callback(func)
ioloop.IOLoop.current().start()

That concludes this detailed summary of Python crawler performance. For more material on the subject, see the other related articles from 易盾网络!
