I want to sort the values in a CSV file by timestamp and write them to another file, but for files with many rows Python runs out of memory (while reading the file). What can I do to make this more efficient, or should I use something other than csv.DictReader?
import csv, sys
import datetime
from pathlib import Path

localPath = "C:/MyPath/"

# data variables
dataDir = localPath + "data/"
dataExtension = ".dat"
pathlistData = Path(dataDir).glob('**/*' + dataExtension)

# Generated filename as date, Format: YYYY-DDDTHH
generatedDataDir = localPath + "result/"
#generatedExtension = ".dat"
errorlog = 'errorlog.csv'

fieldnames = ['TimeStamp', 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J',
              'K', 'L', 'M', 'N', 'O', 'P', 'Q', 'R']

for dataPath in pathlistData:
    # stores our data in a dictionary
    dataDictionary = {}
    dataFileName = str(dataPath).replace('\\', '/')
    newFilePathString = dataFileName.replace(dataDir, generatedDataDir)

    with open(dataPath, 'r') as readFile:
        print("Reading data from " + dataFileName)
        keysAsDate = []
        reader = csv.DictReader(readFile, fieldnames=fieldnames)
        for row in reader:
            try:
                timestamp = row['TimeStamp']
                # create a key based on the timestamp
                timestampKey = datetime.datetime.strptime(timestamp[0:16], "%Y-%jT%H:%M:%S")
                # save this key as a date, used later for sorting
                keysAsDate.append(timestampKey)
                # save the row data in a dictionary
                dataDictionary[timestampKey] = row
            except csv.Error as e:
                sys.exit('file %s, line %d: %s' % (errorlog, reader.line_num, e))

    # sort the keys
    keysAsDate.sort()

    with open(newFilePathString, 'w') as writeFile:
        writer = csv.DictWriter(writeFile, fieldnames=fieldnames, lineterminator='\n')
        print("Writing data to " + newFilePathString)
        # loop over the sorted keys
        for idx in range(0, len(keysAsDate)):
            # get the row from our data dictionary
            writeRow = dataDictionary[keysAsDate[idx]]
            writer.writerow(writeRow)
            if idx % 30000 == 0:
                print("Writing to new file: " + str(int(idx / len(keysAsDate) * 100)) + "%")

    print("Finished writing to file: " + newFilePathString)
Update: I used pandas and split the large file into smaller chunks that I can sort individually. If I then simply append the files one after another, this does not yet solve the problem of values that are badly misplaced (rows that belong in a different chunk).
import pandas as pd

for dataPath in pathlistData:
    dataFileName = str(dataPath).replace('\\', '/')
    #newFilePathString = dataFileName.replace(dataDir, generatedDataDir)
    print("Reading data from " + dataFileName)
    # divide our large file into smaller data frame chunks
    # so we can sort the content in memory
    for df_chunk in pd.read_csv(dataFileName, header=None, chunksize=chunk_size, names=fieldnames):
        # sort each chunk once (no per-row loop is needed for this)
        dataDictionary = df_chunk.sort_values(['TimeStamp'], ascending=True)
        # first and last timestamp of the sorted chunk, used to build the temp file name
        # (.iloc is positional, so this also works when the final chunk is shorter than chunk_size)
        firstTimeStampInChunk = dataDictionary.iloc[0:1]['TimeStamp']
        lastTimeStampInChunk = dataDictionary.iloc[-1:]['TimeStamp']
        timestampStr = str(firstTimeStampInChunk)[chunk_shift:timestamp_size + chunk_shift] \
                     + str(lastTimeStampInChunk)[chunk_shift:timestamp_size + chunk_shift]
        tempFilePathString = (timestampStr + dataExtension).replace(':', '_').replace('\\', '/')
        dataDictionary.to_csv('temp/' + tempFilePathString, header=None, index=False)

# data variables
tempDataDir = localPath + "temp/"
tempPathlistData = Path(tempDataDir).glob('**/*' + dataExtension)
tempPathList = list(tempPathlistData)
My algorithm idea (no code yet) for solving the misplaced-value problem is:
Step 1 - Split the data into smaller chunks, where "chunk size = the maximum number of rows I can handle in memory, divided by two".
Step 2 - Loop over the files in order, merging two files at a time, sorting them together and then splitting them again, so that no file is larger than chunk_size.
Step 3 - Loop backwards, again merging two files at a time, sorting them and splitting them again, so that no file is larger than chunk_size.
Step 4 - Now all the misplaced low values should have reached the lowest part, and all the misplaced high values should have reached the highest part. Append the files in order!
Downside: the time complexity of this is not desirable at all; if I am not mistaken it is basically O(N^2).
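A standard way to avoid the repeated pairwise passes is an external merge sort: since each temp chunk file is already sorted by TimeStamp, all chunk files can be merged in a single streaming pass with heapq.merge, which only keeps one row per chunk file in memory. This is not code from the question or the answer, just a minimal sketch; it reuses fieldnames, dataExtension and generatedDataDir from above, and the output name "merged" is illustrative. It also assumes the timestamps are fixed-width and zero-padded (YYYY-DDDTHH:MM:SS), so comparing them as strings gives chronological order; otherwise parse them with datetime.strptime in the key function.

import csv
import heapq
from pathlib import Path

def rows_from(path, fieldnames):
    # yield (sort_key, row) pairs from one chunk file that is already sorted by TimeStamp
    with open(path, 'r', newline='') as f:
        for row in csv.DictReader(f, fieldnames=fieldnames):
            yield (row['TimeStamp'], row)

chunkFiles = sorted(Path("temp/").glob('**/*' + dataExtension))
mergedPath = generatedDataDir + "merged" + dataExtension  # illustrative output name

with open(mergedPath, 'w', newline='') as out:
    writer = csv.DictWriter(out, fieldnames=fieldnames, lineterminator='\n')
    # heapq.merge lazily merges the already-sorted streams, so only one row
    # per chunk file is in memory at any time
    streams = [rows_from(p, fieldnames) for p in chunkFiles]
    for key, row in heapq.merge(*streams, key=lambda pair: pair[0]):
        writer.writerow(row)

Every chunk file stays open for the whole merge, so this assumes the number of chunk files remains well below the operating system's open-file limit.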
Best Answer
Try the pandas CSV reader, it is very efficient (https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html). You can easily convert between pandas and dictionaries with https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_dict.html.
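A rough illustration of that suggestion, assuming the whole file still fits in memory once loaded as a DataFrame, and reusing dataFileName, newFilePathString and fieldnames from the question's code:

import pandas as pd

# read the .dat file as CSV (no header row), sort by timestamp, write it back out
df = pd.read_csv(dataFileName, header=None, names=fieldnames)
df = df.sort_values('TimeStamp', ascending=True)
df.to_csv(newFilePathString, header=False, index=False)

# if plain dictionaries are needed, DataFrame.to_dict converts the frame,
# e.g. one dict per row:
rows = df.to_dict(orient='records')

For files that do not fit in memory even as a DataFrame, this would still need to be combined with a chunked read and a merge of the sorted chunks, as discussed in the question's update.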
Regarding "Python csv.DictReader running out of memory", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/54042346/