java - Processing large files in Java

Tags: java

I have a requirement to process the records in a text file and insert/update them into a table. Below is the code I have written. When the file reaches 50,000 records, processing takes more than 30 minutes, and when it approaches 80k records an out-of-memory error is thrown. Can anyone suggest a way to optimize this code to improve its performance?

public static String insertIntoCHG_PNT_Table(String FILE_NAME) throws NumberFormatException, IOException
    {
        Date DATE_INSERTED = new Date();
        String strLine = "";
        FileReader fr = new FileReader(FILE_NAME);
        BufferedReader br = new BufferedReader(fr);
        long SEQ = 0;
        double consumption = 1;
        String returnString = "";
        CHG_PNT insertObj = null;
        long KY_PREM_NO = 0;
        long KY_SPT = 0;
        String COD_COT_TYP = "";
        String DT_EFF = "";
        String TS_KY_TOT = "";
        String COD_COT = "";
        String ACL_VLE = "";
        String ACL_QTY = "";
        String WTR_VLE = "";
        String WTR_QTY = "";
        String SWG_VLE = "";
        String SWG_QTY = "";
        String CD_TYPE_ACT = "";
        String DT_TERM = "";
        String CD_STAT = "";
        String DT_STAT = "";
        String VLN_PPE_SIZ_COD = "";
        String WTR_PPE_SIZ_MTD = "";
        String SWG_PPE_SIZ_MTD = "";
        while( (strLine = br.readLine()) != null){ 
            /*
             * Meter Serial No, Property No, Current Meter Index, Previous meter index, Consumption needs to be added
             * 
             * 
             */
            String[] split = strLine.split("\\;");  
            KY_PREM_NO = Long.parseLong(split[0].trim());
            KY_SPT = Long.parseLong(split[1].trim());
            COD_COT_TYP = split[2].trim();
            DT_EFF = split[3].trim();
            TS_KY_TOT = split[4].trim();
            COD_COT = split[5].trim();
            ACL_VLE = split[6].trim();
            ACL_QTY = split[7].trim();
            WTR_VLE = split[8].trim();
            WTR_QTY = split[9].trim();
            SWG_VLE = split[10].trim();
            SWG_QTY = split[11].trim();
            CD_TYPE_ACT = split[12].trim();
            DT_TERM = split[13].trim();
            CD_STAT = split[14].trim();
            DT_STAT = split[15].trim();
            VLN_PPE_SIZ_COD = split[16].trim();
            WTR_PPE_SIZ_MTD = split[17].trim();
            SWG_PPE_SIZ_MTD = split[18].trim();

            long counter = 0;
            long newCounter = 0;
            CHG_PNT checkRecordCount = null;
            checkRecordCount = checkAndUpdateRecord(KY_PREM_NO,KY_SPT,COD_COT_TYP,TS_KY_TOT);

            try {

                if(checkRecordCount == null)
                    insertObj = new CHG_PNT();
                else
                    insertObj = checkRecordCount;
                insertObj.setKY_PREM_NO(KY_PREM_NO);
                //insertObj.setSEQ_NO(SEQ);
                insertObj.setKY_SPT(KY_SPT);
                insertObj.setCOD_COT_TYP(COD_COT_TYP);
                insertObj.setDT_EFF(DT_EFF);
                insertObj.setTS_KY_TOT(TS_KY_TOT);
                insertObj.setCOD_COT(COD_COT);
                insertObj.setACL_VLE(Double.parseDouble(ACL_VLE));
                insertObj.setACL_QTY(Double.parseDouble(ACL_QTY));
                insertObj.setWTR_VLE(Double.parseDouble(WTR_VLE));
                insertObj.setWTR_QTY(Double.parseDouble(WTR_QTY));
                insertObj.setSWG_VLE(Double.parseDouble(SWG_VLE));
                insertObj.setSWG_QTY(Double.parseDouble(SWG_QTY));
                insertObj.setCD_TYPE_ACT(CD_TYPE_ACT);
                insertObj.setDT_TERM(DT_TERM);
                insertObj.setCD_STAT(Double.parseDouble(CD_STAT));
                insertObj.setDT_STAT(DT_STAT);
                insertObj.setVLN_PPE_SIZ_COD(VLN_PPE_SIZ_COD);
                insertObj.setWTR_PPE_SIZ_MTD(WTR_PPE_SIZ_MTD);
                insertObj.setSWG_PPE_SIZ_MTD(SWG_PPE_SIZ_MTD);
                insertObj.setDATE_INSERTED(DATE_INSERTED);
                if(checkRecordCount == null)
                {
                    insertObj.setDATE_INSERTED(DATE_INSERTED);
                    insertObj.insert();
                }
                else
                {
                    insertObj.setDATE_MODIFIED(DATE_INSERTED);
                    insertObj.update();
                }
                BSF.getObjectManager()._commitTransactionDirect(true);

            }catch(Exception e)
            {
                String abc = e.getMessage();
            }

        }
        fr.close();
        br.close();
        String localPath = FILE_NAME;
        File f = new File(FILE_NAME);
        String fullPath = f.getParent();
        String fileName = f.getName();
        String SubStr1 = new String("Processing");
        int index = fullPath.lastIndexOf(SubStr1);
        String path = fullPath.substring(0, index);
        String destPath = path+"\\Archive\\"+fileName;
        PMP_PROPERTIES.copyFile(new File(localPath),new File(destPath));
        File file = new File(FILE_NAME);
        file.delete();
        return null;
    }

Best Answer

There are two main problems. The first is a performance problem, and contrary to your intuition, the culprit is the database insert speed.

You are inserting each item in its own transaction. If you want your inserts to be fast, you should not do that. Introduce a counter variable and perform a commit only every N inserts, plus once more at the end.

int commitStep = 100;
int modCount = 0;

while ((strLine = br.readLine()) != null) {
  // ... your existing parse and insert/update code, minus the per-record commit
  modCount++;
  if (modCount % commitStep == 0) {
    BSF.getObjectManager()._commitTransactionDirect(true);
  }
}
// commit whatever remains after the last full batch
BSF.getObjectManager()._commitTransactionDirect(true);
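A commitStep of 100 is just a starting point: larger batches mean fewer round trips to the database, but also more work to repeat if a batch fails, so it is worth tuning against your own data.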

You can read more about speeding up SQL inserts here: Sql insert speed up
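If your data access layer lets you drop down to plain JDBC, statement batching takes this further by sending many rows per round trip. Below is a minimal sketch under that assumption; the connection URL, credentials, and the three columns shown are illustrative placeholders, since the real CHG_PNT mapping lives inside the custom BSF object manager that the question does not show:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

public class BatchInsertSketch {
    public static void insertAll(List<String[]> records) throws SQLException {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:yourdb://host/db", "user", "pass")) { // placeholder URL/credentials
            conn.setAutoCommit(false); // commit per batch, not per row
            String sql = "INSERT INTO CHG_PNT (KY_PREM_NO, KY_SPT, COD_COT_TYP) VALUES (?, ?, ?)";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                int batchSize = 100;
                int count = 0;
                for (String[] split : records) {
                    ps.setLong(1, Long.parseLong(split[0].trim()));
                    ps.setLong(2, Long.parseLong(split[1].trim()));
                    ps.setString(3, split[2].trim());
                    ps.addBatch();
                    if (++count % batchSize == 0) {
                        ps.executeBatch(); // one round trip for the whole batch
                        conn.commit();
                    }
                }
                ps.executeBatch(); // flush the final partial batch
                conn.commit();
            }
        }
    }
}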

The second problem may be the scalability of the file reading. It works for smaller files, but not for larger ones. The question Read large files in Java has some good answers for your problem.
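For the reading side, a lazy line stream keeps memory usage flat no matter how large the file grows. A minimal sketch using java.nio, assuming the same semicolon-delimited layout as the question (input.txt is a placeholder path):

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class StreamFileLines {
    public static void main(String[] args) throws IOException {
        // Files.lines reads lazily, holding only one line in memory at a time
        try (Stream<String> lines = Files.lines(Paths.get("input.txt"), StandardCharsets.UTF_8)) {
            lines.forEach(line -> {
                String[] split = line.split(";");
                // handle one record here, as in the question's loop body
            });
        }
    }
}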

Regarding java - Processing large files in Java, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/23510007/
