java - How to aggregate by day?

Tags: java apache-spark apache-spark-sql

I have the following POJO:

public class MyPojo {
   Date startDate;
   Double usageAmount;
   // ... bla bla bla
}

So I have a list of MyPojo objects, which is passed as a parameter to this function:

public Map<Date, Double> getWeeklyCost(@NotNull List<MyPojo> reports) {
        JavaRDD<MyPojo> rdd = context.parallelize(reports);
        JavaPairRDD<Date, Double> result = rdd.mapToPair(
                (PairFunction<MyPojo, Date, Double>) x ->
                        new Tuple2<>(x.getStartDate(), x.getUsageAmount()))
                .reduceByKey((Function2<Double, Double, Double>) (x, y) -> x + y);

        return result.collectAsMap();
}

However, I get back something like this:

"2017-06-28T22:00:00.000+0000": 0.02916666,
"2017-06-29T16:00:00.000+0000": 0.02916666,
"2017-06-27T13:00:00.000+0000": 0.03888888,
"2017-06-26T05:00:00.000+0000": 0.05833332000000001,
"2017-06-28T21:00:00.000+0000": 0.03888888,
"2017-06-27T02:00:00.000+0000": 0.03888888,
"2017-06-28T03:00:00.000+0000": 0.07777776000000002,
"2017-06-28T20:00:00.000+0000": 0.01944444,
"2017-06-30T04:00:00.000+0000": 0.00972222,
"2017-06-28T02:00:00.000+0000": 0.05833332000000001,
"2017-06-29T21:00:00.000+0000": 0.03888888,
"2017-06-29T23:00:00.000+0000": 0.06805554000000001,
"2017-06-27T00:00:00.000+0000": 0.05833332000000001,
"2017-06-26T06:00:00.000+0000": 0.03888888,
"2017-06-28T01:00:00.000+0000": 0.09722220000000002,
"2017-06-29T22:00:00.000+0000": 0.01944444,
"2017-06-28T00:00:00.000+0000": 0.11666664000000003,
"2017-06-27T12:00:00.000+0000": 0.01944444,
"2017-06-26T11:00:00.000+0000": 0.01944444,
"2017-06-29T03:00:00.000+0000": 0.01944444,
"2017-06-26T04:00:00.000+0000": 0.07777776000000002,
"2017-06-27T19:00:00.000+0000": 0.01944444,
"2017-06-29T20:00:00.000+0000": 0.048611100000000004,
"2017-06-29T02:00:00.000+0000": 0.02916666,
"2017-06-29T15:00:00.000+0000": 0.01944444,
"2017-06-27T17:00:00.000+0000": 0.01944444,
"2017-06-29T14:00:00.000+0000": 0.02916666,
"2017-06-30T01:00:00.000+0000": 0.02916666,
"2017-06-29T00:00:00.000+0000": 0.01944444,
"2017-06-27T18:00:00.000+0000": 0.03888888,
"2017-06-26T03:00:00.000+0000": 0.07777776000000002,
"2017-06-28T05:00:00.000+0000": 0.05833332000000001,
"2017-06-29T13:00:00.000+0000": 0.01944444,
"2017-06-30T03:00:00.000+0000": 0.00972222,
"2017-06-27T11:00:00.000+0000": 0.01944444,
"2017-06-28T04:00:00.000+0000": 0.05833332000000001,
"2017-06-29T12:00:00.000+0000": 0.00972222,
"2017-06-30T02:00:00.000+0000": 0.06805554000000001,
"2017-06-27T23:00:00.000+0000": 0.09722220000000002,
"2017-06-27T16:00:00.000+0000": 0.01944444,
"2017-06-26T15:00:00.000+0000": 0.01944444,
"2017-06-29T06:00:00.000+0000": 0.00972222,
"2017-06-30T07:00:00.000+0000": 0.00138889,
"2017-06-30T00:00:00.000+0000": 0.01944444,
"2017-06-27T21:00:00.000+0000": 0.01944444,
"2017-06-26T02:00:00.000+0000": 0.07777776000000002,
"2017-06-29T19:00:00.000+0000": 0.00972222,
"2017-06-27T03:00:00.000+0000": 0.03888888,
"2017-06-27T20:00:00.000+0000": 0.01944444,
"2017-06-30T05:00:00.000+0000": 74.1458333,
"2017-06-29T18:00:00.000+0000": 0.00972222,
"2017-06-29T17:00:00.000+0000": 0.01944444,
"2017-06-28T23:00:00.000+0000": 0.00972222,
"2017-06-27T01:00:00.000+0000": 0.01944444,
"2017-06-27T22:00:00.000+0000": 0.05833332000000001

I want to return the data aggregated by day, sorted by date in descending order. For example:

"2017-06-28T03:00:00.000+0000": 0.07777776000000002,
"2017-06-28T20:00:00.000+0000": 0.01944444,

These two entries fall on the same day, so their values (usageAmount) should be added together, yielding a single entry of 0.09722220000000002 for 2017-06-28. I only care about the day, not the time of day. How can I reduce or aggregate my RDD to get the desired result?

**Update**: the answer must be a Spark RDD solution...

Best Answer

This is relatively straightforward (although it takes a fair amount of code).

Let's start with the implementation of the POJO:

static class Record
{
    private Date date;
    private double amount;
    public Record(Date d, double a)
    {
        this.date = d;
        this.amount = a;
    }
    @Override
    public String toString() {
        return date.toString() + "\t" + amount;
    }
}

Now, a utility method that checks whether two records fall on the same day:

private static boolean sameDay(Record r0, Record r1)
{
    Date d0 = r0.date;
    Date d1 = r1.date;

    Calendar cal = new GregorianCalendar();
    cal.setTime(d0);

    int[] dateParts0 = {cal.get(Calendar.DAY_OF_MONTH), cal.get(Calendar.MONTH), cal.get(Calendar.YEAR)};

    cal.setTime(d1);

    return cal.get(Calendar.DAY_OF_MONTH) == dateParts0[0] &&
            cal.get(Calendar.MONTH) == dateParts0[1] &&
            cal.get(Calendar.YEAR) == dateParts0[2];
}
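
As an aside (not part of the original answer), on Java 8+ the same check can be written more compactly with java.time, assuming the JVM's default time zone is acceptable, as it is in the Calendar version above:

private static boolean sameDay(Record r0, Record r1)
{
    // Convert each java.util.Date to a LocalDate in the default zone and compare.
    ZoneId zone = ZoneId.systemDefault();
    return r0.date.toInstant().atZone(zone).toLocalDate()
            .equals(r1.date.toInstant().atZone(zone).toLocalDate());
}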

With that in place, we can move on to the main part of the algorithm. The idea is to sort the input list by date and then loop over it. For each entry being processed, we check whether it falls on the same day as the last entry in the aggregated output. If it does, we add the record's amount to that entry; if not, we append a new entry.

public static List<Record> aggregate(Collection<Record> rs)
{
    List<Record> tmp = new ArrayList<>(rs);
    java.util.Collections.sort(tmp, new Comparator<Record>() {
        @Override
        public int compare(Record o1, Record o2) {
            return o1.date.compareTo(o2.date);
        }
    });

    List<Record> out = new ArrayList<>();
    if(tmp.isEmpty())
    {
        return out; // nothing to aggregate
    }
    // Seed the output with a zero-amount record for the earliest day,
    // so the loop below can simply accumulate into it.
    out.add(new Record(tmp.get(0).date, 0));
    for(int i=0;i<tmp.size();i++)
    {
        Record last = out.get(out.size() - 1);
        Record recordBeingProcessed = tmp.get(i);
        if(sameDay(last, recordBeingProcessed))
        {
            last.amount += recordBeingProcessed.amount;
        }
        else
        {
            // Add a copy rather than the record itself, so that accumulating
            // into the output never mutates the caller's objects.
            out.add(new Record(recordBeingProcessed.date, recordBeingProcessed.amount));
        }
    }

    return out;
}
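
Note that this returns the days in ascending order, while the question asks for descending; reversing the result before returning (for example with java.util.Collections.reverse(out)) would give newest-first output.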

Finally, a small main method to test everything:

public static void main(String[] args) throws ParseException {
    DateFormat format = new SimpleDateFormat("MMMM d, yyyy", Locale.ENGLISH);
    String[] dateStrings = {"January 2, 2010", "January 2, 2010", "January 3, 2010"};
    List<Record> rs = new ArrayList<>();
    for(int i=0;i<dateStrings.length;i++)
    {
        rs.add(new Record(format.parse(dateStrings[i]), 1));
    }
    for(Record r : aggregate(rs))
    {
        System.out.println(r);
    }
}

This prints:

Sat Jan 02 00:00:00 CET 2010    2.0
Sun Jan 03 00:00:00 CET 2010    1.0
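
Since the question's update asks specifically for a Spark RDD solution, the same idea can also be expressed directly in the RDD pipeline: normalize each key to midnight before reducing, so that reduceByKey groups by calendar day. Below is a minimal sketch against the question's own MyPojo and context; the truncateToDay and getDailyCost names are illustrative, not part of the original answer:

private static Date truncateToDay(Date d)
{
    // Zero out the time-of-day fields (in the JVM's default time zone)
    // so every Date on the same calendar day maps to the same key.
    Calendar cal = new GregorianCalendar();
    cal.setTime(d);
    cal.set(Calendar.HOUR_OF_DAY, 0);
    cal.set(Calendar.MINUTE, 0);
    cal.set(Calendar.SECOND, 0);
    cal.set(Calendar.MILLISECOND, 0);
    return cal.getTime();
}

public Map<Date, Double> getDailyCost(@NotNull List<MyPojo> reports)
{
    JavaRDD<MyPojo> rdd = context.parallelize(reports);
    // Key each record by its day (time stripped), then sum the amounts per day.
    return rdd.mapToPair((PairFunction<MyPojo, Date, Double>) x ->
                    new Tuple2<>(truncateToDay(x.getStartDate()), x.getUsageAmount()))
            .reduceByKey((Function2<Double, Double, Double>) (x, y) -> x + y)
            .collectAsMap();
}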

This Q&A on "java - How to aggregate by day?" is based on a similar question found on Stack Overflow: https://stackoverflow.com/questions/44876688/
