python - How to group a sorted file and preserve group order

Tags: python pandas sorting group-by pandas-groupby

I have a large CSV file that is sorted by several of its columns; let's call these the sorted_columns.
I want to perform a groupby on these sorted_columns and apply some logic to each group.

The file does not fit entirely in memory, so I want to read it in chunks and run a groupby on each chunk.

What I have noticed is that the order of the groups is not preserved, even though the file is already sorted by these columns.

Ultimately, this is what I want to do:

import pandas as pd

def run_logic(key, group):
    # some logic
    pass

sorted_columns = [...]  # the columns the file is sorted by

last_group = pd.DataFrame()
last_key = None

# read the file in chunks, since it does not fit in memory
for chunk_df in pd.read_csv("my_file.csv", chunksize=100_000):
    grouped_by_df = chunk_df.groupby(sorted_columns, sort=True)

    for key, group in grouped_by_df:
        if last_key is None or last_key == key:
            # first group, or the same group continuing across chunks
            last_key = key
            last_group = pd.concat([last_group, group])
        else:  # last_key != key: the previous group is complete
            run_logic(last_key, last_group)
            last_key = key
            last_group = group.copy()

# process the very last group
run_logic(last_key, last_group)

But this does not work, because groupby makes no promise to preserve the order of the groups. If the same key exists in two consecutive chunks, there is no guarantee that it will be the last group in the first chunk and the first group in the next one. I tried changing the groupby to use sort=False, and I tried changing the order of the columns, but it did not help.

Does anyone know how to preserve the order of the groups, given that the keys are already sorted in the original file?

Is there another way to read one complete group at a time from the file?

Best answer

I think the essence of your problem is that you are trying to aggregate every group with a single pass over the dataframe. There is a trade-off between how many groups you can hold in memory and how many times you need to read the dataframe.

Note: I deliberately show verbose code to convey the idea that iterating over the df multiple times is necessary. Both solutions are relatively involved, but they do achieve what was asked. There are many aspects of the code that could be improved; any help refactoring it is appreciated.

I will use this dummy "data.csv" file to illustrate my solutions. Save data.csv in the same directory as the script, and you can simply copy and paste the solutions and run them.

sorted1,sorted2,sorted3,other1,other2,other3,other4
1, 1, 1, 'a', 'a', 'a', 'a'
1, 1, 1, 'a', 'a', 'a', 'a'
1, 1, 1, 'a', 'a', 'a', 'a'
1, 1, 1, 'a', 'a', 'a', 'a'
2, 1, 1, 'a', 'a', 'a', 'a'
2, 1, 1, 'd', 'd', 'd', 'd'
2, 1, 1, 'd', 'd', 'd', 'a'
3, 1, 1, 'e', 'e', 'e', 'e'
3, 1, 1, 'b', 'b', 'b', 'b'
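(A side note of mine, not from the original answer: with pandas' default read_csv settings, the leading spaces and literal single quotes are kept as part of the string values; passing skipinitialspace=True and quotechar="'" would strip them. For these examples it does not matter, since we only group on the numeric columns.)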

Initial solution, for the scenario in which we can store all the group keys:

First accumulate all the rows of one group, then process it.

Basically this is what I do: on each pass over the df (in chunks), select one group (or several, if memory allows). Check that it has not already been processed by looking it up in a dictionary of already-processed group keys, then accumulate the selected group's rows chunk by chunk. When all the chunks have been iterated, process the group's data.

import pandas as pd

def run_logic(key, group):
    # some logic
    pass

def accumulate_nextGroup(alreadyProcessed_groups):
    past_accumulated_group = pd.DataFrame()
    pastChunk_groupKey = None
    for chunk_index, chunk_df in enumerate(pd.read_csv("data.csv", iterator=True, chunksize=3)):
        groupby_data = chunk_df.groupby(sorted_columns, sort=True)
        for currentChunk_groupKey, currentChunk_group in groupby_data:
            if (pastChunk_groupKey is None or pastChunk_groupKey == currentChunk_groupKey)\
                    and currentChunk_groupKey not in alreadyProcessed_groups:
                # this is the group selected for the current pass: accumulate its rows
                pastChunk_groupKey = currentChunk_groupKey
                past_accumulated_group = pd.concat(
                        [past_accumulated_group, currentChunk_group]
                )
                print(f'I am the chosen group({currentChunk_groupKey}) of the moment in chunk {chunk_index+1}')
            else:
                if currentChunk_groupKey in alreadyProcessed_groups:
                    print(f'group({currentChunk_groupKey}) is not the chosen group because it was already processed')
                else:
                    print(f'group({currentChunk_groupKey}) is not the chosen group({pastChunk_groupKey}) yet :(')
    return pastChunk_groupKey, past_accumulated_group

alreadyProcessed_groups = {}
sorted_columns = ["sorted1", "sorted2", "sorted3"]
number_of_unique_groups = 3  # known in advance for the dummy file
for iteration_in_df in range(number_of_unique_groups):
    groupKey, groupData = accumulate_nextGroup(alreadyProcessed_groups)
    run_logic(groupKey, groupData)
    alreadyProcessed_groups[groupKey] = "Already Processed"
    print(alreadyProcessed_groups)
    print(f"end of {iteration_in_df+1} iterations in df")
    print("*"*50)

Output of solution 1:

I am the chosen group((1, 1, 1)) of the moment in chunk 1
I am the chosen group((1, 1, 1)) of the moment in chunk 2
group((2, 1, 1)) is not the chosen group((1, 1, 1)) yet :(
group((2, 1, 1)) is not the chosen group((1, 1, 1)) yet :(
group((3, 1, 1)) is not the chosen group((1, 1, 1)) yet :(
{(1, 1, 1): 'Already Processed'}
end of 1 iterations in df
**************************************************
group((1, 1, 1)) is not the chosen group because it was already processed
group((1, 1, 1)) is not the chosen group because it was already processed
I am the chosen group((2, 1, 1)) of the moment in chunk 2
I am the chosen group((2, 1, 1)) of the moment in chunk 3
group((3, 1, 1)) is not the chosen group((2, 1, 1)) yet :(
{(1, 1, 1): 'Already Processed', (2, 1, 1): 'Already Processed'}
end of 2 iterations in df
**************************************************
group((1, 1, 1)) is not the chosen group because it was already processed
group((1, 1, 1)) is not the chosen group because it was already processed
group((2, 1, 1)) is not the chosen group because it was already processed
group((2, 1, 1)) is not the chosen group because it was already processed
I am the chosen group((3, 1, 1)) of the moment in chunk 3
{(1, 1, 1): 'Already Processed', (2, 1, 1): 'Already Processed', (3, 1, 1): 'Already Processed'}
end of 3 iterations in df
**************************************************
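Both scripts here assume number_of_unique_groups is known up front. When it is not, one cheap extra pass that loads only the sorted columns can count it. A minimal sketch of my own (not part of the original answer), reusing the dummy data.csv:

import pandas as pd

sorted_columns = ["sorted1", "sorted2", "sorted3"]

# one extra pass over the file, reading only the key columns
unique_keys = set()
for chunk in pd.read_csv("data.csv", usecols=sorted_columns, chunksize=3):
    # name=None makes itertuples yield plain tuples, which are hashable
    unique_keys.update(chunk[sorted_columns].itertuples(index=False, name=None))

number_of_unique_groups = len(unique_keys)  # 3 for the dummy file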

Updated solution 2: for the scenario in which we cannot store all the group keys in a dictionary:

When we cannot store all the group keys in a dictionary, we need to use each group's relative index within each chunk to build a global reference index for every group. (Note that this solution is much more involved than the previous one.)

The main point of this solution is that we do not need the group key values to identify the groups. In more depth, you can imagine each chunk as a node in a reversed linked list, where the first chunk points to null, the second chunk points to the first chunk, and so on... One pass over the dataframe corresponds to one full traversal of this linked list. For each step (processing one chunk), the only information you need to keep at any given moment is the previous chunk's head, tail and size, and with that information alone you can assign a unique index identifier to the group keys in any chunk.

The other important piece of information is that, because the file is sorted, the reference index of the first group of a chunk is either the same as the reference index of the last group of the previous chunk (when a group spans the chunk boundary) or that index + 1. This is what makes it possible to infer the global reference index from the chunk index.
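With the dummy file and chunksize=3, for example, this plays out as follows: the first chunk contains only (1, 1, 1), which gets global index 0; the second chunk starts with (1, 1, 1), equal to the previous chunk's tail, so it keeps index 0, and (2, 1, 1) gets index 1; the third chunk starts with (2, 1, 1), which keeps index 1, and (3, 1, 1) gets index 2. You can see exactly these links in the InfoPrevChunk tuples of the output below.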

import pandas as pd

def run_logic(key, group):
    # some logic
    pass

def generate_currentChunkGroups_globalReferenceIdx(groupby_data,
        currentChunk_index, previousChunk_link):
    if currentChunk_index == 0:
        # the first chunk starts the global numbering at 0
        currentGroups_globalReferenceIdx = [(i, groupKey)
                for i, (groupKey, _) in enumerate(groupby_data)]
    else:
        _, lastChunk_lastGroup, _ = previousChunk_link
        currentChunk_firstGroupKey = list(groupby_data.groups.keys())[0]
        lastChunk_lastGroupGlobalIdx, lastChunk_lastGroupKey = lastChunk_lastGroup
        if currentChunk_firstGroupKey == lastChunk_lastGroupKey:
            # the group spans the chunk boundary: it keeps its index
            currentChunk_firstGroupGlobalReferenceIdx = lastChunk_lastGroupGlobalIdx
        else:
            # a new group starts at this chunk boundary
            currentChunk_firstGroupGlobalReferenceIdx = lastChunk_lastGroupGlobalIdx + 1

        currentGroups_globalReferenceIdx = [
                (currentChunk_firstGroupGlobalReferenceIdx + i, groupKey)
                for i, groupKey in enumerate(groupby_data.groups.keys())
        ]

    # (head, tail, size) of this chunk: the link handed to the next chunk
    next_previousChunk_link = (currentGroups_globalReferenceIdx[0],
            currentGroups_globalReferenceIdx[-1],
            len(currentGroups_globalReferenceIdx)
    )
    return currentGroups_globalReferenceIdx, next_previousChunk_link

def accumulate_nextGroup(countOf_alreadyProcessedGroups, lastChunk_index, dataframe_accumulator):
    previousChunk_link = None
    currentIdx_beingProcessed = countOf_alreadyProcessedGroups
    for chunk_index, chunk_df in enumerate(pd.read_csv("data.csv", iterator=True, chunksize=3)):
        # iteration_in_df comes from the driver loop below (used for logging only)
        print(f'ITER:{iteration_in_df} CHUNK:{chunk_index} InfoPrevChunk:{previousChunk_link} lastProcessed_chunk:{lastChunk_index}')
        if (lastChunk_index is not None) and (chunk_index < lastChunk_index):
            # chunks before the last processed one: only rebuild the links
            groupby_data = chunk_df.groupby(sorted_columns, sort=True)
            currentChunkGroups_globalReferenceIdx, next_previousChunk_link \
                    = generate_currentChunkGroups_globalReferenceIdx(
                            groupby_data, chunk_index, previousChunk_link
                    )
        elif (lastChunk_index is None) or (chunk_index >= lastChunk_index):
            if chunk_index == lastChunk_index:
                # the chunk where the previous traversal stopped
                groupby_data = chunk_df.groupby(sorted_columns, sort=True)
                currentChunkGroups_globalReferenceIdx, next_previousChunk_link \
                        = generate_currentChunkGroups_globalReferenceIdx(
                                groupby_data, chunk_index, previousChunk_link
                        )
                currentChunkGroupGlobalIndexes = [globalIndex
                        for (globalIndex, _) in currentChunkGroups_globalReferenceIdx]
                if (lastChunk_index is None) or (lastChunk_index <= chunk_index):
                    lastChunk_index = chunk_index
                if currentIdx_beingProcessed in currentChunkGroupGlobalIndexes:
                    currentGroupKey_beingProcessed = [tup
                            for tup in currentChunkGroups_globalReferenceIdx
                            if tup[0] == currentIdx_beingProcessed][0][1]
                    currentChunk_group = groupby_data.get_group(currentGroupKey_beingProcessed)
                    dataframe_accumulator = pd.concat(
                            [dataframe_accumulator, currentChunk_group]
                    )
            else:
                # a chunk beyond the last processed one
                groupby_data = chunk_df.groupby(sorted_columns, sort=True)
                currentChunkGroups_globalReferenceIdx, next_previousChunk_link \
                        = generate_currentChunkGroups_globalReferenceIdx(
                                groupby_data, chunk_index, previousChunk_link
                        )
                currentChunkGroupGlobalIndexes = [globalIndex
                        for (globalIndex, _) in currentChunkGroups_globalReferenceIdx]
                if (lastChunk_index is None) or (lastChunk_index <= chunk_index):
                    lastChunk_index = chunk_index
                if currentIdx_beingProcessed in currentChunkGroupGlobalIndexes:
                    currentGroupKey_beingProcessed = [tup
                            for tup in currentChunkGroups_globalReferenceIdx
                            if tup[0] == currentIdx_beingProcessed][0][1]
                    currentChunk_group = groupby_data.get_group(currentGroupKey_beingProcessed)
                    dataframe_accumulator = pd.concat(
                            [dataframe_accumulator, currentChunk_group]
                    )
                else:
                    # the current group no longer appears: it is complete;
                    # remember where to resume and stop reading
                    countOf_alreadyProcessedGroups += 1
                    lastChunk_index = chunk_index - 1
                    break
        previousChunk_link = next_previousChunk_link
    print(f'Done with chunks for group of global index:{currentIdx_beingProcessed} corresponding to groupKey:{currentGroupKey_beingProcessed}')
    return countOf_alreadyProcessedGroups, lastChunk_index, dataframe_accumulator, currentGroupKey_beingProcessed

sorted_columns = ["sorted1", "sorted2", "sorted3"]
number_of_unique_groups = 3  # known in advance for the dummy file
lastChunk_index = None
for iteration_in_df in range(number_of_unique_groups):
    dataframe_accumulator = pd.DataFrame()
    countOf_alreadyProcessedGroups, lastChunk_index, group_data, currentGroupKey_Processed = \
            accumulate_nextGroup(
                    iteration_in_df, lastChunk_index, dataframe_accumulator
            )
    run_logic(currentGroupKey_Processed, group_data)
    print(f"end of iteration number {iteration_in_df+1} in the df and processed {currentGroupKey_Processed}")
    print(group_data)
    print("*"*50)

Output of solution 2:

ITER:0 CHUNK:0 InfoPrevChunk:None lastProcessed_chunk:None
ITER:0 CHUNK:1 InfoPrevChunk:((0, (1, 1, 1)), (0, (1, 1, 1)), 1) lastProcessed_chunk:0
ITER:0 CHUNK:2 InfoPrevChunk:((0, (1, 1, 1)), (1, (2, 1, 1)), 2) lastProcessed_chunk:1
Done with chunks for group of global index:0 corresponding to groupKey:(1, 1, 1)
end of iteration number 1 in the df and processed (1, 1, 1)
   sorted1  sorted2  sorted3 other1 other2 other3 other4
0        1        1        1    'a'    'a'    'a'    'a'
1        1        1        1    'a'    'a'    'a'    'a'
2        1        1        1    'a'    'a'    'a'    'a'
3        1        1        1    'a'    'a'    'a'    'a'
**************************************************
ITER:1 CHUNK:0 InfoPrevChunk:None lastProcessed_chunk:1
ITER:1 CHUNK:1 InfoPrevChunk:((0, (1, 1, 1)), (0, (1, 1, 1)), 1) lastProcessed_chunk:1
ITER:1 CHUNK:2 InfoPrevChunk:((0, (1, 1, 1)), (1, (2, 1, 1)), 2) lastProcessed_chunk:1
Done with chunks for group of global index:1 corresponding to groupKey:(2, 1, 1)
end of iteration number 2 in the df and processed (2, 1, 1)
   sorted1  sorted2  sorted3 other1 other2 other3 other4
4        2        1        1    'a'    'a'    'a'    'a'
5        2        1        1    'd'    'd'    'd'    'd'
6        2        1        1    'd'    'd'    'd'    'a'
**************************************************
ITER:2 CHUNK:0 InfoPrevChunk:None lastProcessed_chunk:2
ITER:2 CHUNK:1 InfoPrevChunk:((0, (1, 1, 1)), (0, (1, 1, 1)), 1) lastProcessed_chunk:2
ITER:2 CHUNK:2 InfoPrevChunk:((0, (1, 1, 1)), (1, (2, 1, 1)), 2) lastProcessed_chunk:2
Done with chunks for group of global index:2 corresponding to groupKey:(3, 1, 1)
end of iteration number 3 in the df and processed (3, 1, 1)
   sorted1  sorted2  sorted3 other1 other2 other3 other4
7        3        1        1    'e'    'e'    'e'    'e'
8        3        1        1    'b'    'b'    'b'    'b'
**************************************************
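As a final aside (my addition, not part of the accepted answer): since the file is already sorted, the second part of the question, reading one complete group at a time, can also be solved without pandas by streaming rows and cutting on key changes with itertools.groupby. A minimal sketch, assuming each whole group fits in memory:

import csv
from itertools import groupby

def run_logic(key, rows):
    # some logic
    pass

with open("data.csv", newline="") as f:
    reader = csv.reader(f)
    next(reader)  # skip the header row
    # because the file is sorted, consecutive rows sharing the same key
    # are guaranteed to form one complete group
    for key, rows in groupby(reader, key=lambda row: tuple(row[:3])):
        run_logic(key, list(rows))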

Original question and answer on Stack Overflow: https://stackoverflow.com/questions/60723236/
