A Super-Strong Python Anti-Crawler Scheme That requests and Friends Simply Cannot Beat!

Contents
1. Introduction
2. The Anti-Crawler Idea
3. Implementation

1. Introduction

Here is a very strong anti-crawler scheme: reject all HTTP/1.x requests!

Many crawler libraries still have poor HTTP/2.0 support. The famous Python library requests, for example, still speaks only HTTP/1.1, and nobody knows when it will support HTTP/2.0. The Scrapy framework added HTTP/2.0 support in its latest release, 2.5.0 (published 2021-04-06), but the official documentation explicitly flags it as experimental and not recommended for production:

"HTTP/2 support in Scrapy is experimental, and not yet recommended for production environments. Future Scrapy versions may introduce related changes without a deprecation period or warning."

As an aside, how do you enable HTTP/2.0 in Scrapy? Just swap the download handler in settings.py (a runnable sketch follows the list below):

```python
DOWNLOAD_HANDLERS = {
    'https': 'scrapy.core.downloader.handlers.http2.H2DownloadHandler',
}
```

The known limitations of Scrapy's current HTTP/2.0 implementation include:

- No support for HTTP/2.0 cleartext (h2c), since no major browser supports unencrypted HTTP/2.0.
- No setting for a maximum frame size larger than the default of 16384; connections to servers that send larger frames will fail.
- No support for server push.
- No support for the bytes_received and headers_received signals.
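To make the Scrapy side concrete, here is a minimal, hedged sketch of a spider wired up with the experimental handler via custom_settings. It assumes Scrapy 2.5.0 or later with its HTTP/2 dependencies installed, and the target URL is a placeholder:

```python
import scrapy


class Http2DemoSpider(scrapy.Spider):
    """Hypothetical spider that fetches one page over HTTP/2.0."""
    name = 'http2_demo'
    start_urls = ['https://example.com/']  # placeholder target

    # Route all HTTPS downloads through the experimental HTTP/2 handler
    custom_settings = {
        'DOWNLOAD_HANDLERS': {
            'https': 'scrapy.core.downloader.handlers.http2.H2DownloadHandler',
        },
    }

    def parse(self, response):
        # Log enough to confirm the page actually came back
        self.logger.info('Fetched %s (%d bytes)', response.url, len(response.body))
```

Run it with `scrapy runspider` and it should fetch the page even from a server that rejects HTTP/1.x, subject to the limitations listed above.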
As for the other libraries, there is not much to say: their HTTP/2.0 support is also poor. The libraries with decent HTTP/2.0 support at the moment are hyper and httpx, the latter being the simpler and easier to use.

2. The Anti-Crawler Idea

So, can you see the anti-crawler scheme now? If we reject every HTTP/1.x request, won't that kill off the great majority of crawlers in one stroke? requests becomes useless; Scrapy only barely works if you upgrade to the latest version and accept an experimental feature; and the story is much the same in other languages. Meanwhile, browser support for HTTP/2.0 is already very good, so ordinary users browsing the site are unaffected.

3. Implementation

Let's do it, then! How? It is actually very simple: a small Nginx configuration change. The core is this one check:

```nginx
if ($server_protocol !~* "HTTP/2.0") {
    return 444;
}
```

That is all there is to it. Here, $server_protocol is the request protocol, which currently takes one of three values: HTTP/1.0, HTTP/1.1, or HTTP/2.0. The `!~*` operator is a case-insensitive negative regular-expression match, i.e. "does not match". So the condition reads: if the protocol is not HTTP/2.0, return status 444. Code 444 is Nginx's non-standard "connection closed without response": Nginx drops the connection without sending anything back at all.
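For context, here is a minimal sketch of where such a check might live in a plain Nginx setup. Everything in it is a placeholder (domain, certificate paths, web root), and it assumes an Nginx build with HTTP/2 support; note that HTTP/2 must also be enabled on the listen directive, or no client could negotiate it in the first place:

```nginx
server {
    # HTTP/2 has to be switched on here before the protocol check below is useful
    listen 443 ssl http2;
    server_name example.com;                      # placeholder domain

    ssl_certificate     /etc/ssl/fullchain.pem;   # placeholder paths
    ssl_certificate_key /etc/ssl/privkey.pem;

    # Drop any connection that is not speaking HTTP/2.0
    if ($server_protocol !~* "HTTP/2.0") {
        return 444;
    }

    location / {
        root /var/www/html;                       # placeholder web root
    }
}
```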
The official usage of the ingress-nginx server-snippet annotation looks like this:

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/server-snippet: |
      set $agentflag 0;
      if ($http_user_agent ~* "(Mobile)" ){
        set $agentflag 1;
      }
      if ( $agentflag = 1 ) {
        return 301 https://m.example.com;   # placeholder redirect target
      }
```

So here, we only need to swap in the configuration from above:

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/server-snippet: |
      if ($server_protocol !~* "HTTP/2.0") {
          return 444;
      }
```
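As an aside, the v1beta1 Ingress API shown above has since been removed from Kubernetes; on newer clusters the same annotation hangs off a networking.k8s.io/v1 Ingress instead. A hedged sketch, with placeholder name, host, and backend service:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: http2-only-web            # placeholder name
  annotations:
    nginx.ingress.kubernetes.io/server-snippet: |
      if ($server_protocol !~* "HTTP/2.0") {
          return 444;
      }
spec:
  rules:
    - host: example.com           # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web         # placeholder backend service
                port:
                  number: 80
```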
And that's it! Checking the result in a browser, every request goes over HTTP/2.0 and the page loads perfectly normally. Now let's try requests instead:

```python
import requests

response = requests.get('https://example.com/')  # placeholder for the elided target site
print(response.text)
```

It fails most entertainingly:

```
Traceback (most recent call last):
  ...
  raise RemoteDisconnected("Remote end closed connection without"
http.client.RemoteDisconnected: Remote end closed connection without response

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  ...
  raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='...', port=443): Max retries exceeded with url: / (Caused by ProxyError('Cannot connect to proxy.', RemoteDisconnected('Remote end closed connection without response')))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  ...
requests.exceptions.ProxyError: HTTPSConnectionPool(host='...', port=443): Max retries exceeded with url: / (Caused by ProxyError('Cannot connect to proxy.', RemoteDisconnected('Remote end closed connection without response')))
```

With requests there is no way through at all, because it simply does not support HTTP/2.0.
So what if we switch to a library that does support HTTP/2.0, such as httpx? It can be installed like this:

```
pip3 install 'httpx[http2]'
```

Note that you need Python 3.6 or later to use httpx. Once it is installed, test it:

```python
import httpx

client = httpx.Client(http2=True)
response = client.get('https://example.com/')  # placeholder for the elided target site
print(response.text)
```

The result: the page source is printed exactly as in the browser; the request goes through over HTTP/2.0 without a hitch.
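If you want to verify which protocol actually served the response, httpx exposes it on the response object. A quick check, using the same placeholder URL:

```python
import httpx

# With http2=True, httpx negotiates HTTP/2 wherever the server offers it
client = httpx.Client(http2=True)
response = client.get('https://example.com/')  # placeholder target
print(response.http_version)  # expected to print "HTTP/2" against an HTTP/2-only server
```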
And what if we set the http2 parameter to False?

```python
import httpx

client = httpx.Client(http2=False)
response = client.get('https://example.com/')  # same placeholder target
print(response.text)
```

Just as unlucky:

```
Traceback (most recent call last):
  ...
  raise RemoteProtocolError(msg)
httpcore.RemoteProtocolError: Server disconnected without sending a response.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  ...
  raise mapped_exc(message) from exc
httpx.RemoteProtocolError: Server disconnected without sending a response.
```

So this confirms it: anything stuck on HTTP/1.x is completely out of luck! You can go burn incense for requests now. Yet another invincible anti-crawler scheme is born! Webmasters, roll it out!

That concludes this article on a super-strong Python anti-crawler scheme that requests and friends simply cannot beat. For more on Python anti-crawler techniques, search our earlier articles or keep browsing the related articles below. We hope you will keep supporting us!