Asked by: 小点点

Boto3 is much slower than Boto2 for reading many small objects from an S3 bucket


I've noticed that boto3 takes roughly 3x as long as boto2 to read the same objects from an S3 bucket. The Python script below illustrates the problem. My environment is Ubuntu 18.04, Python 3.7.9, boto 2.49.0, and boto3 1.16.63.

The script reads 1,000 objects from an S3 bucket using 20 threads. With boto2 this takes 5-6 seconds, but with boto3 it takes 18-19 seconds.

I've tried different numbers of threads. I've also tried setting max_concurrency in the boto3 transfer configuration. Neither seems to make any difference.

Can anyone explain why boto3 is so much slower, or how to make it faster?

#!/usr/bin/python -u

"""
This script compares the performance of boto2 and boto3 for reading 1,000 small objects from an S3 bucket.
You'll need to change the value of BUCKET_NAME to the name of a bucket to which the script has read/write access.
"""

import boto
import boto3
from tempfile import NamedTemporaryFile
from threading import Thread
import time

BUCKET_NAME = 'deleteme-steve'
bucket2 = boto.connect_s3().get_bucket(BUCKET_NAME)
s3_boto3 = boto3.client('s3')

# Create 1,000 test objects in an S3 bucket. Once the objects exist, this code can be commented out.
with NamedTemporaryFile(mode='wt') as ntf:
    ntf.write('This is a test')
    ntf.flush()
    for i in range(1000):
        s3_boto3.upload_file(ntf.name, BUCKET_NAME, 'test{}'.format(i))

# Each of the 20 threads reads its own slice of 50 keys (1,000 total).
def read2(i):
    for j in range(50 * i, 50 * (i + 1)):
        k = bucket2.get_key('test{}'.format(j))
        with NamedTemporaryFile() as ntf:
            k.get_contents_to_file(ntf)

def read3(i):
    for j in range(50 * i, 50 * (i + 1)):
        with NamedTemporaryFile() as ntf:
            s3_boto3.download_fileobj(BUCKET_NAME, 'test{}'.format(j), ntf)

# Time the 20-thread read once with each library.
for boto_version in [2, 3]:
    threads = []
    start_time = time.time()
    for i in range(20):
        t = Thread(target=read2 if boto_version == 2 else read3, args=(i,))
        threads.append(t)
        t.start()
    for t in threads:
        t.join()
    print('boto {}: {} seconds'.format(boto_version, time.time() - start_time))

1 Answer

Anonymous user

It turns out that the boto3 slowness occurs when running under Python 2 (which is no longer supported), not Python 3. Under Python 3, boto2 and boto3 were roughly equal in speed in my tests.