Get size of a file before downloading in Python
I am downloading an entire directory from a web server. It works OK, but I can't figure out how to get the file size before downloading, so I can compare it and skip the download if the file on the server was not updated. Can this be done the same way as when downloading a file from an FTP server?
import urllib
import re

url = "http://www.someurl.com"

# Download the page locally
f = urllib.urlopen(url)
html = f.read()
f.close()

f = open("temp.htm", "w")
f.write(html)
f.close()

# List only the .TXT / .ZIP files
fnames = re.findall('^.*<a href="(\w+(?:\.txt|.zip)?)".*$', html, re.MULTILINE)

for fname in fnames:
    print fname, "..."

    f = urllib.urlopen(url + "/" + fname)

    #### Here I want to check the filesize to download or not ####
    file = f.read()
    f.close()

    f = open(fname, "w")
    f.write(file)
    f.close()
@Jon: Thanks for your quick answer. It works, but the file size on the web server is slightly smaller than the size of the downloaded file.
Example:
Local Size     Server Size
2.223.533      2.115.516
  664.603        662.121
Could this be related to CR/LF conversion?
I reproduced what you are seeing:
import urllib, os

link = "http://python.org"
print "opening url:", link
site = urllib.urlopen(link)
meta = site.info()
print "Content-Length:", meta.getheaders("Content-Length")[0]

f = open("out.txt", "r")
print "File on disk:", len(f.read())
f.close()

f = open("out.txt", "w")
f.write(site.read())
site.close()
f.close()

f = open("out.txt", "r")
print "File on disk after download:", len(f.read())
f.close()

print "os.stat().st_size returns:", os.stat("out.txt").st_size
This outputs:
opening url: http://python.org
Content-Length: 16535
File on disk: 16535
File on disk after download: 16535
os.stat().st_size returns: 16861
What am I doing wrong here? Is os.stat().st_size not returning the correct size?
Edit:
OK, I found out what the problem was:
import urllib, os

link = "http://python.org"
print "opening url:", link
site = urllib.urlopen(link)
meta = site.info()
print "Content-Length:", meta.getheaders("Content-Length")[0]

f = open("out.txt", "rb")
print "File on disk:", len(f.read())
f.close()

f = open("out.txt", "wb")
f.write(site.read())
site.close()
f.close()

f = open("out.txt", "rb")
print "File on disk after download:", len(f.read())
f.close()

print "os.stat().st_size returns:", os.stat("out.txt").st_size
Output:
$ python test.py
opening url: http://python.org
Content-Length: 16535
File on disk: 16535
File on disk after download: 16535
os.stat().st_size returns: 16535
Make sure you open both files for binary read/write.
# open for binary write
open(filename, "wb")
# open for binary read
open(filename, "rb")
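To see why text mode matters here, a small Python 2 sketch (the file names are made up; the size difference only shows up on Windows, where text mode translates "\n" to "\r\n" on write):

import os

data = "line1\nline2\nline3\n"             # 18 bytes of payload

f = open("text_mode.txt", "w")             # text mode: newline translation applies
f.write(data)
f.close()

f = open("binary_mode.txt", "wb")          # binary mode: bytes are written as-is
f.write(data)
f.close()

print "text mode:  ", os.stat("text_mode.txt").st_size    # 21 on Windows (3 newlines become \r\n)
print "binary mode:", os.stat("binary_mode.txt").st_size  # 18 everywhere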
Using the returned-urllib-object method info(), you can get various information about the retrieved document. Example of grabbing the current Google logo:
>>> import urllib
>>> d = urllib.urlopen("http://www.google.com/intl/en_ALL/images/logo.gif")
>>> print d.info()

Content-Type: image/gif
Last-Modified: Thu, 07 Aug 2008 16:20:19 GMT
Expires: Sun, 17 Jan 2038 19:14:07 GMT
Cache-Control: public
Date: Fri, 08 Aug 2008 13:40:41 GMT
Server: gws
Content-Length: 20172
Connection: Close
It's a dict, so to get the size of the file, use urllibobject.info()['Content-Length']:
print f.info()['Content-Length']
And to get the size of the local file (for comparison), you can use the os.stat() command:
os.stat("/the/local/file.zip").st_size
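Putting the two together, a rough sketch of the comparison the question is after (the URL and file name are placeholders, and the sketch assumes the server actually sends a Content-Length header):

import os
import urllib

url = "http://www.someurl.com/file.zip"   # placeholder URL
local = "file.zip"                        # placeholder local path

remote = urllib.urlopen(url)
remote_size = int(remote.info()['Content-Length'])

# Download only when the local copy is missing or the sizes differ
if not os.path.exists(local) or os.stat(local).st_size != remote_size:
    out = open(local, "wb")               # binary mode keeps the sizes comparable
    out.write(remote.read())
    out.close()
else:
    print "Skipping", local, "- size unchanged"
remote.close()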
The size of the file is sent as the Content-Length header. Here is how to get it with urllib:
>>> site = urllib.urlopen("http://python.org")
>>> meta = site.info()
>>> print meta.getheaders("Content-Length")
['16535']
>>>
Also, if the server you are connecting to supports it, look at ETags and the If-Modified-Since and If-None-Match headers.
Using these will take advantage of the web server's caching rules, and the server will return a 304 Not Modified status code if the content hasn't changed, as sketched below.
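A minimal conditional-request sketch using the requests library (the URL is a placeholder, and remembering the validators between runs is an assumption, not part of the original answer):

import requests

url = "http://www.example.com/file.zip"   # placeholder URL

# First request: remember the validators the server sends back
first = requests.get(url)
etag = first.headers.get("ETag")
last_modified = first.headers.get("Last-Modified")

# Later request: ask the server to send the body only if the content changed
headers = {}
if etag:
    headers["If-None-Match"] = etag
if last_modified:
    headers["If-Modified-Since"] = last_modified

second = requests.get(url, headers=headers)
if second.status_code == 304:
    print("Not modified - keep the local copy")
else:
    print("Changed - new Content-Length: " + str(second.headers.get("Content-Length")))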
A requests-based solution using HEAD instead of GET (which also prints the HTTP headers):
#!/usr/bin/python
# display size of a remote file without downloading

from __future__ import print_function
import sys
import requests

# number of bytes in a megabyte
MBFACTOR = float(1 << 20)

response = requests.head(sys.argv[1], allow_redirects=True)

print("\n".join([('{:<40}: {}'.format(k, v)) for k, v in response.headers.items()]))
size = response.headers.get('content-length', 0)
print('{:<40}: {:.2f} MB'.format('FILE SIZE', int(size) / MBFACTOR))
Usage:
$ python filesize-remote-url.py https://httpbin.org/image/jpeg
...
Content-Length                          : 35588
FILE SIZE (MB)                          : 0.03 MB
For python3 (tested on 3.5), I would recommend:
from urllib.request import urlopen

with urlopen(file_url) as in_file, open(local_file_address, 'wb') as out_file:
    print(in_file.getheader('Content-Length'))
    out_file.write(in_file.read())
In Python 3:
>>> import urllib.request
>>> site = urllib.request.urlopen("http://python.org")
>>> print("FileSize:", site.length)
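And if you want only the size, without transferring the body, a small sketch (not part of the original answer) that issues a HEAD request through urllib.request; note that some servers may not send a Content-Length at all:

>>> import urllib.request
>>> req = urllib.request.Request("http://python.org", method="HEAD")
>>> resp = urllib.request.urlopen(req)
>>> print("FileSize:", resp.getheader("Content-Length"))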