For some of our logs, the Percona Toolkit pt-query-digest tool works fine, but for others we get the following output:
# Files: /.../mysqld_slow.log
# Overall: 0 total, 1 unique, 0 QPS, 0x concurrency ______________________
# Attribute total min max avg 95% stddev median
# ============ ======= ======= ======= ======= ======= ======= =======
# Query size 18.19M 18.19M 18.19M 18.19M 18.19M 0 18.19M
# Profile
# Rank Query ID Response time Calls R/Call Apdx V/M Ite
# ========== ========== ========== ========== ========== ==== ===== ======
Does anyone know what is wrong with my log file? It appears to be valid; its first few lines look like this:
Sep 28 00:00:37 gcdb-master mysqld_slow_log: SELECT `companies`.* FROM `companies` WHERE `companies`.`id` = 576 LIMIT 1;
Sep 28 00:00:37 gcdb-master mysqld_slow_log: # User@Host: db_one[db_one] @ ip-127.0.0.1.ec2.internal [127.0.0.1]
Sep 28 00:00:37 gcdb-master mysqld_slow_log: # Query_time: 0.041188 Lock_time: 0.000046 Rows_sent: 1 Rows_examined: 46418
Sep 28 00:00:37 gcdb-master mysqld_slow_log: SET timestamp=1348790434;
Sep 28 00:00:37 gcdb-master mysqld_slow_log: SELECT `companies`.* FROM `companies` WHERE `companies`.`id` = 286358 LIMIT 1;
Sep 28 00:00:37 gcdb-master mysqld_slow_log: # User@Host: db_one[db_one] @ ip-127.0.0.1.ec2.internal [127.0.0.1]
Sep 28 00:00:37 gcdb-master mysqld_slow_log: # Query_time: 0.030769 Lock_time: 0.000050 Rows_sent: 1 Rows_examined: 46583
Sep 28 00:00:37 gcdb-master mysqld_slow_log: SET timestamp=1348790434;
Sep 28 00:00:37 gcdb-master mysqld_slow_log: SELECT `companies`.* FROM `companies` WHERE `companies`.`id` = 286679 LIMIT 1;
Sep 28 00:00:37 gcdb-master mysqld_slow_log: # User@Host: db_one[db_one] @ ip-127.0.0.1.ec2.internal [127.0.0.1]
Sep 28 00:00:37 gcdb-master mysqld_slow_log: # Query_time: 0.594351 Lock_time: 0.000038 Rows_sent: 12 Rows_examined: 342673
Best answer
I ran some tests with your sample, and I suspect your file is invalid. The following file, derived from yours by cutting the syslog-like portion off each line and adding the two missing #-description lines before the first query, seems to work:
# User@Host: db_one[db_one] @ ip-127.0.0.1.ec2.internal [127.0.0.1]
# Query_time: 0.041188 Lock_time: 0.000046 Rows_sent: 1 Rows_examined: 46418
SELECT `companies`.* FROM `companies` WHERE `companies`.`id` = 576 LIMIT 1;
# User@Host: db_one[db_one] @ ip-127.0.0.1.ec2.internal [127.0.0.1]
# Query_time: 0.041188 Lock_time: 0.000046 Rows_sent: 1 Rows_examined: 46418
SET timestamp=1348790434;
SELECT `companies`.* FROM `companies` WHERE `companies`.`id` = 286358 LIMIT 1;
# User@Host: db_one[db_one] @ ip-127.0.0.1.ec2.internal [127.0.0.1]
# Query_time: 0.030769 Lock_time: 0.000050 Rows_sent: 1 Rows_examined: 46583
SET timestamp=1348790434;
SELECT `companies`.* FROM `companies` WHERE `companies`.`id` = 286679 LIMIT 1;
# User@Host: db_one[db_one] @ ip-127.0.0.1.ec2.internal [127.0.0.1]
# Query_time: 0.594351 Lock_time: 0.000038 Rows_sent: 12 Rows_examined: 342673
The same file, except with the first line deleted so that it begins with a #-description line, produced this output:
# 240ms user time, 20ms system time, 24.59M rss, 87.74M vsz
# Current date: Fri Nov 2 22:03:02 2012
# Hostname: mintaka
# Files: orig.log
# Overall: 3 total, 1 unique, 0 QPS, 0x concurrency ______________________
# Attribute total min max avg 95% stddev median
# ============ ======= ======= ======= ======= ======= ======= =======
# Exec time 113ms 31ms 41ms 38ms 40ms 5ms 40ms
# Lock time 142us 46us 50us 47us 49us 2us 44us
# Rows sent 3 1 1 1 1 0 1
# Rows examine 136.15k 45.33k 45.49k 45.38k 44.45k 0.00 44.45k
# Query size 234 76 79 78 76.28 1.50 76.28
# Profile
# Rank Query ID Response time Calls R/Call Apdx V/M Item
# ==== ================== ============= ===== ====== ==== ===== ==========
# 1 0x0C756AF10BC44B0D 0.1131 100.0% 3 0.0377 1.00 0.00 SELECT companies
# Query 1: 0 QPS, 0x concurrency, ID 0x0C756AF10BC44B0D at byte 226 ______
# This item is included in the report because it matches --limit.
# Scores: Apdex = 1.00 [1.0]*, V/M = 0.00
# Query_time sparkline: | ^ |
# Attribute pct total min max avg 95% stddev median
# ============ === ======= ======= ======= ======= ======= ======= =======
# Count 100 3
# Exec time 100 113ms 31ms 41ms 38ms 40ms 5ms 40ms
# Lock time 100 142us 46us 50us 47us 49us 2us 44us
# Rows sent 100 3 1 1 1 1 0 1
# Rows examine 100 136.15k 45.33k 45.49k 45.38k 44.45k 0.00 44.45k
# Query size 100 234 76 79 78 76.28 1.50 76.28
# String:
# Hosts ip-127.0.0.1.ec2.internal
# Users db_one
# Query_time distribution
# 1us
# 10us
# 100us
# 1ms
# 10ms ################################################################
# 100ms
# 1s
# 10s+
# Tables
# SHOW TABLE STATUS LIKE 'companies'\G
# SHOW CREATE TABLE `companies`\G
# EXPLAIN /*!50100 PARTITIONS*/
SELECT `companies`.* FROM `companies` WHERE `companies`.`id` = 286358 LIMIT 1\G
So my guess is that the problem is related to the log file format and possibly to rotation (e.g., the file was truncated, so the initial #-description lines are missing).
I also looked at some of the Percona utility code. The default parser (slowlog) searches for a # Time: line to obtain the timestamp; that line is present in my slow log but missing from yours. By itself this doesn't prevent parsing, but it may skew the results.
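For reference, this is roughly what one event looks like in a slow log written directly by mysqld (not via syslog); the # Time: header below is the line the default parser looks for. The values are illustrative, copied from the sample above:

```
# Time: 120928  0:00:37
# User@Host: db_one[db_one] @ ip-127.0.0.1.ec2.internal [127.0.0.1]
# Query_time: 0.041188  Lock_time: 0.000046 Rows_sent: 1  Rows_examined: 46418
SET timestamp=1348790434;
SELECT `companies`.* FROM `companies` WHERE `companies`.`id` = 576 LIMIT 1;
```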
It also uses ";\n#" as the input record separator, so the syslog format is a definite "no" for the default --type slowlog. None of the other --type values seem to fit the syslog+slowlog combination either.
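The effect of that record separator can be sketched with awk, using awk's RS as a stand-in for Perl's `$/` (this assumes an awk with multi-character RS support, such as gawk or mawk, and is only an illustration, not Percona's actual code):

```shell
# In a plain slow log, ";\n#" occurs between events, so awk sees two records:
printf '# Q1 header\nSELECT 1;\n# Q2 header\nSELECT 2;\n' \
  | awk 'BEGIN { RS = ";\n#" } END { print NR }'    # prints 2

# In the syslog-wrapped log every line starts with the syslog prefix, so a
# "#" never directly follows ";\n" and the whole file becomes one record:
printf 'Sep 28 00:00:37 host tag: SELECT 1;\nSep 28 00:00:37 host tag: # header\n' \
  | awk 'BEGIN { RS = ";\n#" } END { print NR }'    # prints 1
```

A single giant record would also explain the "0 total, 1 unique" summary and the 18.19M "Query size" in the output above.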
I tried changing the input record separator and adding a hack to strip the syslog portion of each line. It seemed to work, but I don't trust it, because the results were inconsistent with those from a known-good slow log.
I'm afraid the simplest fix is to cut off the beginning of the file, and then the beginning of each line, before feeding it to the utility:
sed -e '/.*: #/,$b' -e 'd' < slow.log \
| cut -d' ' -f6- \
| pt-query-digest -
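For example, here are the two stages applied to a few syslog-formatted lines (the field count in `cut -d' ' -f6-` assumes exactly five space-separated prefix fields, as in the sample above):

```shell
# sed keeps everything from the first "...: #" line onward; cut then drops
# the five syslog prefix fields (month, day, time, host, tag).
printf '%s\n' \
  'Sep 28 00:00:37 gcdb-master mysqld_slow_log: SELECT 1;' \
  'Sep 28 00:00:37 gcdb-master mysqld_slow_log: # User@Host: db_one[db_one] @ localhost []' \
  'Sep 28 00:00:37 gcdb-master mysqld_slow_log: SET timestamp=1348790434;' \
  | sed -e '/.*: #/,$b' -e 'd' \
  | cut -d' ' -f6-
# Output:
# # User@Host: db_one[db_one] @ localhost []
# SET timestamp=1348790434;
```

The first line is dropped because it precedes the first #-description line; the remaining lines keep everything after the fifth space.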
This discussion of "mysql - pt-query-digest no results and '0 total, 1 unique'" is based on a question found on Stack Overflow: https://stackoverflow.com/questions/13094210/