python - Column and row operations in Python Pandas

Tags: python file csv pandas pandastream

This is my first program of my own in Pandas, and I am trying to perform some CSV operations by column and row. I have a transition repository containing multiple files, and files keep being added to it. I am trying to read the files dynamically, perform some operations, and write the result to a master CSV file in another folder.

Input

1. Folder_1: `Transition_Data`  


Test_1.csv, Test_2.csv

    Nos,Time,Count          Nos,Time,Count
    -------------------     ------------------
    2341,12:00:00,9865      1234,12:30:00,7865
    7352,12:00:00,8969      8435,12:30:00,7649

2. Folder_2: `Data_repository`: Master_2.csv


    Nos,00:00:00
    ------------
    1234,1000
    8435,5243
    2341,563
    7352,345

3. Expected Output

    Nos,00:00:00,12:00:00,12:30:00
    ----------------------------------
    1234,1000,0,6865
    8435,5243,0,2406
    2341,563,9302,0
    7352,345,8624,0

Read the Nos column from the transition_data files and check where each Nos appears in Master_2.csv. Using Time, create a new column each time (with the Time value as the new header), and subtract col[1] of Master_2.csv from col[2] of the Transition_data file, filling the new values into the newly created column; if data is blank, fill it with 0. I did try several examples, but I messed it up.
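
For reference, a minimal sketch of what that logic could look like in pandas, assuming the folder layout shown above (transition files under Transition_Data/ and the master file at Data_repository/Master_2.csv):

    import glob
    import pandas as pd

    # Read every transition file and stack them into one frame.
    transition = pd.concat([pd.read_csv(f) for f in glob.glob('Transition_Data/*.csv')])

    # Pivot so each Time becomes its own column of Counts, indexed by Nos.
    pivot = pd.pivot_table(transition, index='Nos', columns='Time', values='Count')

    # Join onto the master data, subtract the 00:00:00 baseline row-wise,
    # and fill the gaps with 0 afterwards.
    master = pd.read_csv('Data_repository/Master_2.csv').set_index('Nos')
    result = master.join(pivot)
    result[pivot.columns] = result[pivot.columns].sub(result['00:00:00'], axis=0)
    result = result.fillna(0)

Here `pd.pivot_table` turns each distinct Time into a column of Counts, which is what the broken `pivot` line in the program below is trying to do.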

The program has been updated as shown below; the problem now is with the logic for routing the file reads and writes.

    import pandas as pd
    import os
    import numpy as np
    import glob

    path_1 = '/Transition_Data/'
    path_2 = 'Data_repository/Master_2.csv'

    df_1 = pd.DataFrame(dict(Nos=Nos, Time=Time, Count=Count))

    pivot = pd.pivot_table(path_1, '/.*CSV, index='Nos', columns='Time', values='Count')

    df_master = pd.DataFrame('Master_2.csv', {'Nos':, '00:00:00':}).set_index('Nos')

    result = df_master.join(pivot, how='inner')

    result[result.columns[1:]] = result[result.columns[1:]].sub(result[result.columns[0]], axis=0)

    result.fillna(0)

I tried the above program and got the following error:

Traceback (most recent call last):
  File "read_test.py", line 19, in <module>
    df = pd.read_csv(filename, header='Count')
  File "/usr/lib/python2.7/dist-packages/pandas/io/parsers.py", line 420, in parser_f
    return _read(filepath_or_buffer, kwds)
  File "/usr/lib/python2.7/dist-packages/pandas/io/parsers.py", line 218, in _read
    parser = TextFileReader(filepath_or_buffer, **kwds)
  File "/usr/lib/python2.7/dist-packages/pandas/io/parsers.py", line 502, in __init__
    self._make_engine(self.engine)
  File "/usr/lib/python2.7/dist-packages/pandas/io/parsers.py", line 610, in _make_engine
    self._engine = CParserWrapper(self.f, **self.options)
  File "/usr/lib/python2.7/dist-packages/pandas/io/parsers.py", line 972, in __init__
    self._reader = _parser.TextReader(src, **kwds)
  File "parser.pyx", line 476, in pandas.parser.TextReader.__cinit__ (pandas/parser.c:4538)
TypeError: an integer is required
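
The traceback comes from `pd.read_csv(filename, header='Count')`: the `header` argument of `read_csv` expects an integer row number (the row that contains the column names), not a column name, which is why the C parser raises `TypeError: an integer is required`. A minimal corrected call, using one of the transition files above as an example, would be:

    import pandas as pd

    # header=0 marks the first row (Nos,Time,Count) as the column names;
    # passing a column name string here is what triggered the TypeError.
    df = pd.read_csv('Transition_Data/Test_1.csv', header=0)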

Best Answer

The simplest way I can see is to concatenate them all into one DataFrame, sort the columns by time, and then shift and subtract to get the deltas:

import pandas as pd
import os

path_1 = 'Transition_Data/'
path_2 = 'Data_repository/Master_2.csv'

# Read data, and combine "transition" data into 
# single joined data frame
master = pd.read_csv(path_2)
other_data = pd.concat([
        pd.read_csv(path_1 + f) for f in os.listdir(path_1)
    ])

# Index master data frame by Nos
master.set_index('Nos', inplace=True)

# Index transition data by Nos and Time
other_data.set_index(['Nos', 'Time'], inplace=True)

# Convert to series (to remove Count column heading)
# and unstack time to convert to columns
other_data = other_data['Count'].unstack('Time')

# Join the data sets on the Time axis
joined = pd.concat([master, other_data], axis=1)

# Sort the data sets by Time
joined = joined.sort_index(axis=1)

# Fill na values with data in previous period
joined = joined.fillna(method='pad',axis=1)

# Shift dataframe and subtract to get delta
delta = joined - joined.shift(axis=1).fillna(0)

print(delta)

This gives the output you want:
      00:00:00  12:00:00  12:30:00
Nos                               
1234      1000         0      6865
2341       563      9302         0
7352       345      8624         0
8435      5243         0      2406
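
If the combined table also needs to be written back to the master file in Data_repository, as the question describes, a possible final step (assuming the master file should simply be overwritten with the new table) is:

    # Write the delta table back out; the Nos index becomes the first column.
    delta.to_csv(path_2)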

Regarding python - Column and row operations in Python Pandas, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/31059121/
