SQL query runs very slowly - postgres

Tags: sql postgresql postgresql-9.4 postgresql-9.5

This query currently takes 4 minutes to run:

with name1 as (
  select col1 as a1, col2 as a2, sum(FEE) as a3
  from s1, date
  where return_date = datesk and year = 2000
  group by col1, col2
)
select  c_id
from name1 ala1, ss, cc
where ala1.a3 > (
    select avg(a3) * 1.2 from name1 ctr2
    where ala1.a2 = ctr2.a2
  )
  and s_sk = ala1.a2
  and s_state = 'TN'
  and ala1.a1 = c_sk
order by c_id
limit 100;

I have already set work_mem = '1000MB' and enable_nestloop = off.
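
For reference, these were applied at the session level, roughly like this (assuming SET rather than an edit to postgresql.conf; the exact method is not shown above):

-- Session-level settings used while testing this query
SET work_mem = '1000MB';
SET enable_nestloop = off;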

The EXPLAIN ANALYZE output for this query is at: http://explain.depesz.com/s/DUa

QUERY PLAN

-----------------------------------------------------------------------------------------------------------------------------------------------------
--------------------
 Limit  (cost=59141.02..59141.09 rows=28 width=17) (actual time=253707.928..253707.940 rows=100 loops=1)
   CTE name1
     ->  HashAggregate  (cost=11091.33..11108.70 rows=1390 width=14) (actual time=105.223..120.358 rows=50441 loops=1)
           Group Key: s1.col1, s1.col2
           ->  Hash Join  (cost=2322.69..11080.90 rows=1390 width=14) (actual time=10.390..79.897 rows=55820 loops=1)
                 Hash Cond: (s1.return_date = date.datesk)
                 ->  Seq Scan on s1  (cost=0.00..7666.14 rows=287514 width=18) (actual time=0.005..33.801 rows=287514 loops=1)
                 ->  Hash  (cost=2318.11..2318.11 rows=366 width=4) (actual time=10.375..10.375 rows=366 loops=1)
                       Buckets: 1024  Batches: 1  Memory Usage: 13kB
                       ->  Seq Scan on date  (cost=0.00..2318.11 rows=366 width=4) (actual time=5.224..10.329 rows=366 loops=1)
                             Filter: (year = 2000)
                             Rows Removed by Filter: 72683
   ->  Sort  (cost=48032.32..48032.39 rows=28 width=17) (actual time=253707.923..253707.930 rows=100 loops=1)
         Sort Key: cc.c_id
         Sort Method: top-N heapsort  Memory: 32kB
         ->  Hash Join  (cost=43552.37..48031.65 rows=28 width=17) (actual time=253634.511..253696.291 rows=18976 loops=1)
               Hash Cond: (cc.c_sk = ala1.a1)
               ->  Seq Scan on cc  (cost=0.00..3854.00 rows=100000 width=21) (actual time=0.009..18.527 rows=100000 loops=1)
               ->  Hash  (cost=43552.02..43552.02 rows=28 width=4) (actual time=253634.420..253634.420 rows=18976 loops=1)
                     Buckets: 1024  Batches: 1  Memory Usage: 668kB
                     ->  Hash Join  (cost=1.30..43552.02 rows=28 width=4) (actual time=136.819..253624.375 rows=18982 loops=1)
                           Hash Cond: (ala1.a2 = ss.s_sk)
                           ->  CTE Scan on name1 ala1  (cost=0.00..43548.70 rows=463 width=8) (actual time=136.756..253610.817 rows=18982 loops=1)
                                 Filter: (a3 > (SubPlan 2))
                                 Rows Removed by Filter: 31459
                                 SubPlan 2
                                   ->  Aggregate  (cost=31.29..31.31 rows=1 width=32) (actual time=5.025..5.025 rows=1 loops=50441)
                                         ->  CTE Scan on name1 ctr2  (cost=0.00..31.27 rows=7 width=32) (actual time=0.032..3.860 rows=8241 loops=50441)
                                               Filter: (ala1.a2 = a2)
                                               Rows Removed by Filter: 42200
                           ->  Hash  (cost=1.15..1.15 rows=12 width=4) (actual time=0.036..0.036 rows=12 loops=1)
                                 Buckets: 1024  Batches: 1  Memory Usage: 1kB
                                 ->  Seq Scan on ss  (cost=0.00..1.15 rows=12 width=4) (actual time=0.025..0.033 rows=12 loops=1)
                                       Filter: (s_state = 'TN'::bpchar)
 Planning time: 0.316 ms
 Execution time: 253708.351 ms
(36 rows)

With enable_nestloop = on, the EXPLAIN ANALYZE result is at: http://explain.depesz.com/s/NPo

QUERY PLAN

-----------------------------------------------------------------------------------------------------------------------------------------------------
--------------
 Limit  (cost=54916.36..54916.43 rows=28 width=17) (actual time=257869.004..257869.015 rows=100 loops=1)
   CTE name1
     ->  HashAggregate  (cost=11091.33..11108.70 rows=1390 width=14) (actual time=92.354..104.103 rows=50441 loops=1)
           Group Key: s1.col1, s1.col2
           ->  Hash Join  (cost=2322.69..11080.90 rows=1390 width=14) (actual time=9.371..68.156 rows=55820 loops=1)
                 Hash Cond: (s1.return_date = date.datesk)
                 ->  Seq Scan on s1  (cost=0.00..7666.14 rows=287514 width=18) (actual time=0.011..25.637 rows=287514 loops=1)
                 ->  Hash  (cost=2318.11..2318.11 rows=366 width=4) (actual time=9.343..9.343 rows=366 loops=1)
                       Buckets: 1024  Batches: 1  Memory Usage: 13kB
                       ->  Seq Scan on date  (cost=0.00..2318.11 rows=366 width=4) (actual time=4.796..9.288 rows=366 loops=1)
                             Filter: (year = 2000)
                             Rows Removed by Filter: 72683
   ->  Sort  (cost=43807.66..43807.73 rows=28 width=17) (actual time=257868.994..257868.998 rows=100 loops=1)
         Sort Key: cc.c_id
         Sort Method: top-N heapsort  Memory: 32kB
         ->  Nested Loop  (cost=0.29..43806.98 rows=28 width=17) (actual time=120.358..257845.941 rows=18976 loops=1)
               ->  Nested Loop  (cost=0.00..43633.22 rows=28 width=4) (actual time=120.331..257692.654 rows=18982 loops=1)
                     Join Filter: (ala1.a2 = ss.s_sk)
                     Rows Removed by Join Filter: 208802
                     ->  CTE Scan on name1 ala1  (cost=0.00..43548.70 rows=463 width=8) (actual time=120.316..257652.636 rows=18982 loops=1)
                           Filter: (a3 > (SubPlan 2))
                           Rows Removed by Filter: 31459
                           SubPlan 2
                             ->  Aggregate  (cost=31.29..31.31 rows=1 width=32) (actual time=5.105..5.105 rows=1 loops=50441)
                                   ->  CTE Scan on name1 ctr2  (cost=0.00..31.27 rows=7 width=32) (actual time=0.032..3.952 rows=8241 loops=50441)
                                         Filter: (ala1.a2 = a2)
                                         Rows Removed by Filter: 42200
                     ->  Materialize  (cost=0.00..1.21 rows=12 width=4) (actual time=0.000..0.001 rows=12 loops=18982)
                           ->  Seq Scan on ss  (cost=0.00..1.15 rows=12 width=4) (actual time=0.007..0.012 rows=12 loops=1)
                                 Filter: (s_state = 'TN'::bpchar)
               ->  Index Scan using cc_pkey on cc  (cost=0.29..6.20 rows=1 width=21) (actual time=0.007..0.007 rows=1 loops=18982)
                     Index Cond: (c_sk = ala1.a1)
 Planning time: 0.453 ms
 Execution time: 257869.554 ms
(34 rows)

Many other queries run fast with enable_nestloop = off; for this one it makes little difference. The raw data is not very large, so 4 minutes is far too long. I was expecting something like 4-5 seconds.

Why does it take so long?! I have tried this on Postgres 9.4 and 9.5; the result is the same. Maybe I could create a BRIN index, but I am not sure which columns to create it on.
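
(If a BRIN index is tried, note that BRIN is only available from 9.5 onward and only pays off when the indexed column roughly follows the physical row order. A purely hypothetical sketch, on the column that drives the join inside the CTE:)

-- Hypothetical: BRIN index on s1.return_date (PostgreSQL 9.5+ only).
-- Worthwhile only if return_date correlates with the physical order of rows in s1.
CREATE INDEX s1_return_date_brin ON s1 USING brin (return_date);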

Configuration settings:

effective_cache_size         | 89GB 
shared_buffers               | 18GB
work_mem                     | 1000MB
maintenance_work_mem         | 500MB
checkpoint_segments          | 32
constraint_exclusion         | on
checkpoint_completion_target | 0.5 
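
These values can be read back from a session with a catalog query such as:

-- Confirm the settings listed above for the current session
SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('effective_cache_size', 'shared_buffers', 'work_mem',
               'maintenance_work_mem', 'checkpoint_segments',
               'constraint_exclusion', 'checkpoint_completion_target');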

Best answer

As John Bollinger commented, your subquery gets evaluated for every row of the main query. But since you are taking the average of a plain column, you can easily move the subquery out into its own CTE and compute the average only once, which should speed things up enormously:

with name1 as (
  select col1 as a1, col2 as a2, sum(FEE) as a3
  from s1, date
  where return_date = datesk and year = 2000
  group by col1, col2
), avg_a3_by_a2 as (
  select a2, avg(a3) * 1.2 as avg12
  from name1
  group by a2
)
select c_id
from name1, avg_a3_by_a2, ss, cc
where name1.a3 > avg_a3_by_a2.avg12
  and name1.a2 = avg_a3_by_a2.a2
  and s_sk = name1.a2
  and s_state = 'TN'
  and name1.a1 = c_sk
order by c_id
limit 100;

The new CTE computes the average + 20% for each distinct value of a2.

Please also use explicit JOIN syntax instead of comma-separated FROM items, as it makes your code much more readable. And if you start using aliases in a query, use them consistently on all tables and columns. I could not fix these two points myself because the necessary information is missing.
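
As an illustration only, assuming from the plan output that s_sk and s_state belong to ss and that c_sk and c_id belong to cc, the explicit-JOIN form would look roughly like this (a sketch, not verified against your actual schema):

with name1 as (
  select col1 as a1, col2 as a2, sum(FEE) as a3
  from s1
  join date on return_date = datesk
  where year = 2000
  group by col1, col2
), avg_a3_by_a2 as (
  select a2, avg(a3) * 1.2 as avg12
  from name1
  group by a2
)
select cc.c_id
from name1
join avg_a3_by_a2 on avg_a3_by_a2.a2 = name1.a2
join ss on ss.s_sk = name1.a2   -- assumes s_sk and s_state are columns of ss
join cc on cc.c_sk = name1.a1   -- assumes c_sk and c_id are columns of cc
where name1.a3 > avg_a3_by_a2.avg12
  and ss.s_state = 'TN'
order by cc.c_id
limit 100;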

A similar question about "SQL query runs very slowly - postgres" can be found on Stack Overflow: https://stackoverflow.com/questions/31306422/
