I'm trying to get the following to work:
import csv

csv_record_counter = 1
csv_file_counter = 1

while csv_record_counter <= 1000000:
    with open('some_csv_file_' + str(csv_file_counter) + '.csv', 'w') as csvfile:
        output_writer = csv.writer(csvfile, lineterminator = "\n")
        output_writer.writerow(['record'])
        csv_record_counter += 1

while not csv_record_counter <= 1000000:
    csv_record_counter = 1
    csv_file_counter += 1
Problem: once the record count goes above 1,000,000, the subsequent files are never created. The script keeps adding records to the original file.
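(What I'm after is roughly this rollover pattern; the sketch below is only illustrative, with a stand-in data source, and is not my actual script:)

import csv

source_records = ['record'] * 2500000          # stand-in for the real data
records_per_file = 1000000
record_counter = 0
file_counter = 1

csvfile = open('some_csv_file_' + str(file_counter) + '.csv', 'w')
output_writer = csv.writer(csvfile, lineterminator="\n")
for record in source_records:
    if record_counter == records_per_file:     # roll over to a new file
        csvfile.close()
        file_counter += 1
        csvfile = open('some_csv_file_' + str(file_counter) + '.csv', 'w')
        output_writer = csv.writer(csvfile, lineterminator="\n")
        record_counter = 0
    output_writer.writerow([record])
    record_counter += 1
csvfile.close()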
I like to batch the data before exporting it.
import csv

def batch(iterable, n=1):
    length = len(iterable)
    for ndx in range(0, length, n):
        yield iterable[ndx:min(ndx + n, length)]

headers = []   # Your headers
products = []  # Millions of products go here
batch_size = int(len(products) / 4)  # Example
# OR in your case, batch_size = 1000000

for idx, product_batch in enumerate(batch(products, batch_size)):
    with open('products_{}.csv'.format(idx + 1), 'w') as csvfile:
        writer = csv.DictWriter(csvfile, fieldnames=headers)
        writer.writeheader()
        for product in product_batch:
            writer.writerow(product)
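Note that batch() relies on len(), so it only works on sequences that are already fully in memory. If the products instead come from a generator or a database cursor, a chunking helper built on itertools.islice gives the same behaviour; the following is only a sketch under that assumption (the headers and the product generator are made up):

import csv
import itertools

def batch_iter(iterable, n):
    # Yield lists of up to n items from any iterable, without needing len().
    it = iter(iterable)
    while True:
        chunk = list(itertools.islice(it, n))
        if not chunk:
            break
        yield chunk

headers = ['name', 'price']  # illustrative headers
products = ({'name': 'p{}'.format(i), 'price': i} for i in range(2500000))  # assumed source

for idx, product_batch in enumerate(batch_iter(products, 1000000)):
    with open('products_{}.csv'.format(idx + 1), 'w') as csvfile:
        writer = csv.DictWriter(csvfile, fieldnames=headers)
        writer.writeheader()
        writer.writerows(product_batch)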
First, indent your second while loop and remove the "not". Then use a for loop instead of a while loop to create the CSVs. Also, don't forget to reset csv_record_counter.
import csv

csv_record_counter = 1
rows = ...  # Your number of rows to process
additional_file = 1 if rows % 1000000 != 0 else 0  # one extra file for any remainder

for csv_file in range(1, int(rows / 1000000) + 1 + additional_file):  # rows is your maximum number of rows; this gives the number of CSV files to create
    with open('some_csv_file_' + str(csv_file) + '.csv', 'w') as csvfile:
        output_writer = csv.writer(csvfile, lineterminator="\n")
        output_writer.writerow(['record'])
        csv_record_counter = 1  # Remove your "+"
        while csv_record_counter <= 1000000:  # Remove your "not"
            csv_record_counter += 1
            output_writer.writerow(["your record"])
Edit: added additional_file
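The additional_file arithmetic can also be written with math.ceil, which gives the number of files directly; this variant is only a sketch and not part of the answer above:

import math

rows = 2500000                     # example total; substitute your own count
rows_per_file = 1000000
number_of_files = math.ceil(rows / rows_per_file)  # 3 files for 2,500,000 rows

for csv_file in range(1, number_of_files + 1):
    # open 'some_csv_file_' + str(csv_file) + '.csv' and write up to rows_per_file rows
    pass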
Try writefile.flush() after writer.writerow().
The flush call clears the buffer, freeing the RAM for new work.
When you are handling a huge number of rows, the buffer keeps filling up and is not cleared until the currently running code exits.
So it is better to flush the buffer manually every time you write something to the file.
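A minimal sketch of that suggestion (the data source and file name here are stand-ins):

import csv

records = ['record'] * 1000                  # stand-in for your real rows
with open('some_csv_file_1.csv', 'w') as csvfile:
    writer = csv.writer(csvfile, lineterminator="\n")
    for record in records:
        writer.writerow([record])
        csvfile.flush()                      # flush the write buffer after each row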