I'm doing some log processing with Apache Pig Latin, and I'm wondering whether there is a simpler way to do this:
filtered_logs = FOREACH logs GENERATE numDay, reqSize, optimizedSize, origSize, compressionPct, cacheStatus;
grouped_logs = GROUP filtered_logs BY numDay;
results = FOREACH grouped_logs GENERATE group,
(SUM(filtered_logs.reqSize) + SUM(filtered_logs.optimizedSize)) / 1048576.00 AS ClientThroughputMB,
(SUM(filtered_logs.reqSize) + SUM(filtered_logs.origSize)) / 1048576.00 AS ServerThroughputMB,
SUM(filtered_logs.origSize) / 1048576.00 AS OrigMB,
SUM(filtered_logs.optimizedSize) / 1048576.00 AS OptMB,
SUM(filtered_logs.reqSize) / 1048576.00 AS SentMB,
AVG(filtered_logs.compressionPct) AS CompressionAvg,
COUNT(filtered_logs) AS NumLogs;
cache_hit_logs = FILTER filtered_logs BY cacheStatus MATCHES '.*HIT.*';
grouped_cache_hit_logs = GROUP cache_hit_logs BY numDay;
cache_hits = FOREACH grouped_cache_hit_logs GENERATE group,
COUNT(cache_hit_logs) AS cnt;
final_results = JOIN results BY group, cache_hits BY group;
DUMP final_results;
(logs is defined earlier; it basically reads pipe-delimited log files and assigns the fields)
What I'm trying to do here is count the number of records whose cacheStatus field contains "HIT", while also computing the other aggregates such as OrigMB, CompressionAvg, NumLogs, and so on. The code above works, but it seems to carry a huge performance overhead. Is there a way in Pig Latin to do something like this (from MSSQL)?
SUM(CASE CacheStatus WHEN 'HIT' THEN 1 else 0 END) as CacheHit
(Basically, I don't want to scan the logs multiple times; I'd rather compute everything in a single pass.)
Sorry if my question is worded confusingly; I'm still fairly new to Pig Latin.
Best answer
Never mind, I found my own solution (silly me, I forgot that I can enclose statements in curly braces):
results = FOREACH grouped_logs
{
cache_hits = FILTER filtered_logs BY cacheStatus MATCHES '.*HIT.*';
GENERATE group,
(SUM(filtered_logs.reqSize) + SUM(filtered_logs.optimizedSize)) / 1048576.00 AS ClientThroughputMB,
(SUM(filtered_logs.reqSize) + SUM(filtered_logs.origSize)) / 1048576.00 AS ServerThroughputMB,
SUM(filtered_logs.origSize) / 1048576.00 AS OrigMB,
SUM(filtered_logs.optimizedSize) / 1048576.00 AS OptMB,
SUM(filtered_logs.reqSize) / 1048576.00 AS SentMB,
AVG(filtered_logs.compressionPct) AS CompressionAvg,
COUNT(filtered_logs) AS NumLogs,
COUNT(cache_hits) AS CacheHit;
}
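For completeness, the SQL-style conditional sum from the question can also be expressed directly in Pig using the bincond operator (?:), by projecting a 0/1 flag before grouping. This is just a sketch assuming the same relation and field names as above; flagged_logs, grouped_flagged, and hit_counts are illustrative aliases:

-- Tag each record with 1 if cacheStatus contains HIT, else 0
flagged_logs = FOREACH filtered_logs GENERATE numDay,
    (cacheStatus MATCHES '.*HIT.*' ? 1 : 0) AS isHit;
grouped_flagged = GROUP flagged_logs BY numDay;
-- SUM of the flag is the hit count, mirroring SUM(CASE ... WHEN 'HIT' THEN 1 ELSE 0 END)
hit_counts = FOREACH grouped_flagged GENERATE group,
    SUM(flagged_logs.isHit) AS CacheHit,
    COUNT(flagged_logs) AS NumLogs;

Either approach (the nested FILTER inside FOREACH, or the bincond flag) avoids the separate FILTER/GROUP/JOIN pipeline and processes the logs in a single pass.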
Regarding hadoop - conditional sums over data with Apache Pig Latin, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/6904452/