postgresql - PostgreSQL: One index is fast and another one is slow

Tags: postgresql indexing

We have a database with 450 million rows, structured like this:

   uid      id_1     id_2   d1  d2  d3  d4  d5  d6  d7  d8  d9  d10 d11 d12 d13 d14 d15 d16 d17
81038392    5655067 5468882 373 117 185 152 199 173 168 138 185 159 154 34  38  50  34  41  57
81038393    5655067 5468883 374 116 184 118 170 143 144 113 164 137 138 37  39  53  37  42  60
81038394    5655067 5468884 371 118 187 118 170 143 144 105 157 131 136 32  35  47  32  39  53
81038395    5655067 5468885 370 116 184 118 170 143 144 105 157 131 136 31  35  46  31  38  53
81038396    5655067 5468886 370 117 185 118 170 143 144 105 157 131 136 29  34  44  29  37  50
81038397    5655067 5470853 368 117 185 110 163 137 140 105 157 131 136 34  36  48  34  39  55
81038398    5655067 5470854 372 119 188 118 170 143 144 113 164 137 138 34  36  49  34  40  55
81038399    5655067 5470855 360 115 182 103 151 131 136 98  145 125 131 30  34  45  30  38  51
81038400    5655067 5470856 357 112 177 103 151 131 136 98  145 125 131 30  34  45  30  37  51
81038401    5655067 5470857 356 111 176 103 151 131 136 98  145 125 131 28  33  43  28  36  50
81038402    5655067 5470858 358 113 179 103 151 131 136 98  145 125 131 31  35  46  31  38  52
81038403    5655067 5472811 344 109 173 152 199 173 168 138 185 159 154 31  36  46  31  39  52
81038404    5655068 5468882 373 117 185 152 199 173 168 138 185 159 154 34  38  50  34  41  57
81038405    5655068 5468883 374 116 184 118 170 143 144 113 164 137 138 37  39  53  37  42  60
81038406    5655068 5468884 371 118 187 118 170 143 144 105 157 131 136 32  35  47  32  39  53
81038407    5655068 5468885 370 116 184 118 170 143 144 105 157 131 136 31  35  46  31  38  53
81038408    5655068 5468886 370 117 185 118 170 143 144 105 157 131 136 29  34  44  29  37  50
81038409    5655068 5470853 368 117 185 110 163 137 140 105 157 131 136 34  36  48  34  39  55
81038410    5655068 5470854 372 119 188 118 170 143 144 113 164 137 138 34  36  49  34  40  55
81038411    5655068 5470855 360 115 182 103 151 131 136 98  145 125 131 30  34  45  30  38  51
81038412    5655068 5470856 357 112 177 103 151 131 136 98  145 125 131 30  34  45  30  37  51
81038413    5655068 5470857 356 111 176 103 151 131 136 98  145 125 131 28  33  43  28  36  50
81038414    5655068 5470858 358 113 179 103 151 131 136 98  145 125 131 31  35  46  31  38  52


We frequently need to run queries like this:
Query 1:
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM mytable WHERE id_1 = 5655067;

                                                               QUERY PLAN                                                                
-----------------------------------------------------------------------------------------------------------------------------------------
 Index Scan using id_1_idx on mytable (cost=0.57..99187.68 rows=25742 width=80) (actual time=47.081..2600.899 rows=21487 loops=1)
   Index Cond: (id_1 = 5655067)
   Buffers: shared hit=9 read=4816
   I/O Timings: read=2563.181
 Planning time: 0.151 ms
 Execution time: 2602.320 ms
(6 rows)

Query 2:
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM mytable WHERE id_2 = 5670433;

                                                            QUERY PLAN                                                             
-----------------------------------------------------------------------------------------------------------------------------------
 Bitmap Heap Scan on mytable (cost=442.02..89887.42 rows=23412 width=80) (actual time=113.200..42127.512 rows=21487 loops=1)
   Recheck Cond: (id_2 = 5670433)
   Heap Blocks: exact=16988
   Buffers: shared hit=30 read=17020
   I/O Timings: read=41971.798
   ->  Bitmap Index Scan on id_2_idx  (cost=0.00..436.16 rows=23412 width=0) (actual time=104.928..104.929 rows=21487 loops=1)
         Index Cond: (id_2 = 5670433)
         Buffers: shared hit=2 read=60
         I/O Timings: read=99.235
 Planning time: 0.163 ms
 Execution time: 42132.556 ms
(11 rows)

There are roughly 23,000 to 25,000 distinct values of id_1 and id_2, and both queries will always return around 24,000 rows. We only read the data; it does not change over time.
The problem:
Query 1 takes about 3 seconds, which is a bit much but still tolerable.
Query 2 takes 30-40 seconds, which is far too much for us, since the service is an interactive web service.
We have indexed both id_1 and id_2. We also added a combined index on id_1 and id_2, as suggested by Azure PostgreSQL, the platform hosting the data. It did not help.
My assumption is that query 1 is fast because all the rows are located sequentially in the database, while with query 2 the rows are always scattered non-sequentially across the whole database.
Restructuring the data to speed up query 2 is not a good option, since that would degrade the performance of query 1. I know the way this data is structured is not ideal, but I have no control over it. Any suggestions for speeding up query 2 to a reasonable level?
Edit 2:
The CREATE INDEX statements:
CREATE INDEX id_1_idx ON mytable (id_1);
CREATE INDEX id_2_idx ON mytable (id_2);

Vacuuming the table did not change the plans. After the vacuum, the output of EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM mytable WHERE id_1 = 5655067 was very similar. Here is the output of the verbose vacuum:
VACUUM (VERBOSE, ANALYZE) mytable;

INFO:  vacuuming "public.mytable"
INFO:  index "mytable_pkey" now contains 461691169 row versions in 1265896 pages
DETAIL:  0 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU: user: 0.00 s, system: 0.00 s, elapsed: 2695.21 s.
INFO:  index "id_1_idx" now contains 461691169 row versions in 1265912 pages
DETAIL:  0 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU: user: 0.00 s, system: 0.00 s, elapsed: 1493.20 s.
INFO:  index "id_2_idx" now contains 461691169 row versions in 1265912 pages
DETAIL:  0 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU: user: 0.00 s, system: 0.00 s, elapsed: 1296.06 s.
INFO:  index "mytable_id_1_id_2_idx" now contains 461691169 row versions in 1265912 pages
DETAIL:  0 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU: user: 0.00 s, system: 0.00 s, elapsed: 2364.16 s.
INFO:  "mytable": found 0 removable, 389040319 nonremovable row versions in 5187205 out of 6155883 pages
DETAIL:  0 dead row versions cannot be removed yet, oldest xmin: 12767
There were 0 unused item pointers.
Skipped 0 pages due to buffer pins, 0 frozen pages.
0 pages are entirely empty.
CPU: user: 0.00 s, system: 0.00 s, elapsed: 13560.60 s.
INFO:  analyzing "public.mytable"
INFO:  "mytable": scanned 30000 of 6155883 pages, containing 2250000 live rows and 0 dead rows; 30000 rows in sample, 461691225 estimated total rows
VACUUM

Best Answer

TL;DR
Storage I/O is your main bottleneck, plus there is not enough RAM for the indexes, as you can easily compute yourself:
From the bitmap heap scan you can calculate an average block read latency of about 2.5 ms (17020 blocks read in 41971.798 ms), which is far too slow.
The only way to avoid disk reads is plenty of RAM. Faster storage would make the system more scalable, since this is most likely not the only query type, nor the only table, in the database.
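The ~2.5 ms figure above is simple division; as a sanity check it can be reproduced right in psql, using the numbers from the Buffers/I/O Timings lines of query 2:

```sql
-- 17020 blocks read in 41971.798 ms of read I/O (query 2 above)
SELECT round(41971.798 / 17020, 2) AS avg_read_ms_per_block;  -- ≈ 2.47 ms
```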
The long version:
Reading the pristine EXPLAIN output indicates that the planner's cost assessment is way off and that the performance drop comes from disk reads.
As you wrote, the data does not change over time (so you know the value ranges in advance), you could also range-partition the table on those two columns, so that only a specific partition would have to be scanned (with smaller indexes and smaller table heaps to read). However, if the applications accessing this data end up accessing more or less the full data range anyway, this will not help much either.
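A minimal sketch of what such range partitioning could look like, assuming PostgreSQL 10+ declarative partitioning and hypothetical, invented bounds (a table can have only one partition key, so this partitions on id_1 alone; the d-columns are elided):

```sql
-- Hypothetical sketch only: bounds chosen for illustration.
CREATE TABLE mytable_part (
    uid  integer,
    id_1 integer,
    id_2 integer
    -- , d1 integer, ..., d17 integer
) PARTITION BY RANGE (id_1);

CREATE TABLE mytable_part_1 PARTITION OF mytable_part
    FOR VALUES FROM (5600000) TO (5620000);
-- ... further partitions covering the rest of the id_1 range ...

-- Partitioned indexes (PostgreSQL 11+ creates them on each partition):
CREATE INDEX ON mytable_part (id_1);
CREATE INDEX ON mytable_part (id_2);
```

Queries filtering on id_1 would then touch only one partition; queries on id_2 would still scan all partitions, just with smaller per-partition indexes.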
As a consequence, you should consider replacing the storage subsystem so that queries can be handled within the performance requirements of your application.
I suspect that the PostgreSQL server is still running on HDDs rather than SSDs. A small test with only 120M rows shows the following characteristics for both indexes:

create table nums (
    uid  integer primary key,
    id_1 integer,
    id_2 integer,
    d1  integer, d2  integer, d3  integer, d4  integer, d5  integer,
    d6  integer, d7  integer, d8  integer, d9  integer, d10 integer,
    d11 integer, d12 integer, d13 integer, d14 integer, d15 integer,
    d16 integer, d17 integer
);

INSERT INTO nums
SELECT generate_series(80000001, 200000000)  AS uid,
       (random() * 23000)::integer + 5600000 AS id_1,
       (random() * 25000)::integer + 5600000 AS id_2,
       (random() * 1000)::integer AS d1,
       (random() * 1000)::integer AS d2,
       (random() * 1000)::integer AS d3,
       (random() * 1000)::integer AS d4,
       (random() * 1000)::integer AS d5,
       (random() * 1000)::integer AS d6,
       (random() * 1000)::integer AS d7,
       (random() * 1000)::integer AS d8,
       (random() * 1000)::integer AS d9,
       (random() * 1000)::integer AS d10,
       (random() * 1000)::integer AS d11,
       (random() * 100)::integer  AS d12,
       (random() * 100)::integer  AS d13,
       (random() * 100)::integer  AS d14,
       (random() * 100)::integer  AS d15,
       (random() * 100)::integer  AS d16,
       (random() * 100)::integer  AS d17;

create index id_1_idx on nums (id_1);
create index id_2_idx on nums (id_2);
cluster nums using id_1_idx;

…leading to the following results (both cold reads):
explain (analyze, buffers) select * from nums where id_1 = 5606001;
                                                       QUERY PLAN                                                        
-------------------------------------------------------------------------------------------------------------------------
 Index Scan using id_1_idx on nums  (cost=0.57..5816.92 rows=5198 width=80) (actual time=1.680..6.394 rows=5185 loops=1)
   Index Cond: (id_1 = 5606001)
   Buffers: shared read=88
   I/O Timings: read=4.397
 Planning Time: 4.002 ms
 Execution Time: 7.475 ms
(6 rows)

Time: 15.924 ms

…and for id_2:
explain (analyze, buffers) select * from nums where id_2 = 5606001; 
                                                        QUERY PLAN                                                         
---------------------------------------------------------------------------------------------------------------------------
 Index Scan using id_2_idx on nums  (cost=0.57..5346.53 rows=4777 width=80) (actual time=0.376..985.689 rows=4748 loops=1)
   Index Cond: (id_2 = 5606001)
   Buffers: shared hit=1 read=4755
   I/O Timings: read=972.555
 Planning Time: 0.203 ms
 Execution Time: 986.590 ms
(6 rows)

Time: 987.296 ms

So although my table is "only" 12 GiB + 3 × 2.5 GiB (PK + 2 indexes), it is still fast enough.
If the server is already running on SSDs, make sure to (physically) separate the data storage for WAL/log, table data (tablespace), and indexes (tablespace), to benefit as much as possible from parallelism and to reduce I/O interference from other services/applications on the same system.
Also consider a server system with much more memory for the table and index data (around 48 GiB for this table + ~10 GiB per index, assuming all columns are integer), and then perform a warm-up to push the data from disk into memory. At least the indexes should be able to stay entirely in memory.
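One way to do such a warm-up is the pg_prewarm extension that ships with PostgreSQL (index names as in the question; requires appropriate privileges):

```sql
CREATE EXTENSION IF NOT EXISTS pg_prewarm;

-- Push the indexes (and, RAM permitting, the table) into shared buffers:
SELECT pg_prewarm('id_1_idx');
SELECT pg_prewarm('id_2_idx');
-- SELECT pg_prewarm('mytable');  -- only useful if the table fits in memory
```

Each call returns the number of blocks loaded.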
EDIT:
The reason my server did not use a bitmap (index + heap) scan is that I am running on SSDs and have tuned random_page_cost down from the default of 4 to 1.1. For an HDD system, of course, that would make no sense.
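The random_page_cost tuning mentioned here can be done per session for experiments, or persisted server-wide; a minimal sketch:

```sql
-- Session-local, for experimenting with plans:
SET random_page_cost = 1.1;

-- Or persisted in postgresql.auto.conf (takes effect after a config reload):
ALTER SYSTEM SET random_page_cost = 1.1;
SELECT pg_reload_conf();
```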
EDIT 2:
Retesting the situation revealed an interesting behavior:
In my test I assumed that the first column, uid, is the primary key column and serial (sequential integers), by which the entries are initially sorted on disk. While generating the data, the values of the interesting indexed columns id_1 and id_2 were both generated randomly, which is usually the worst case for big tables.
However, not so in this case. After creating the test data and the indexes, and after analyzing the table, but before physically reordering the data using the index on column id_1, I now get these results:
explain (analyze, buffers) select * from nums where id_1 = 5606001;
                                                       QUERY PLAN                                                       
------------------------------------------------------------------------------------------------------------------------
 Bitmap Heap Scan on nums  (cost=63.32..7761.68 rows=5194 width=80) (actual time=1.978..41.007 rows=5210 loops=1)
   Recheck Cond: (id_1 = 5606001)
   Heap Blocks: exact=5198
   Buffers: shared read=5217
   I/O Timings: read=28.732
   ->  Bitmap Index Scan on id_1_idx  (cost=0.00..62.02 rows=5194 width=0) (actual time=1.176..1.176 rows=5210 loops=1)
         Index Cond: (id_1 = 5606001)
         Buffers: shared read=19
         I/O Timings: read=0.124
 Planning Time: 7.214 ms
 Execution Time: 41.419 ms
(11 rows)

…and:
explain (analyze, buffers) select * from nums where id_2 = 5606001;
                                                       QUERY PLAN                                                       
------------------------------------------------------------------------------------------------------------------------
 Bitmap Heap Scan on nums  (cost=58.52..7133.04 rows=4768 width=80) (actual time=7.305..43.830 rows=4813 loops=1)
   Recheck Cond: (id_2 = 5606001)
   Heap Blocks: exact=4805
   Buffers: shared hit=12 read=4810
   I/O Timings: read=28.181
   ->  Bitmap Index Scan on id_2_idx  (cost=0.00..57.33 rows=4768 width=0) (actual time=5.102..5.102 rows=4813 loops=1)
         Index Cond: (id_2 = 5606001)
         Buffers: shared read=17
         I/O Timings: read=2.414
 Planning Time: 0.227 ms
 Execution Time: 44.197 ms
(11 rows)

All plans + timings are available here:
using id_1_idx
using id_2_idx
I also followed my own best practice of separating the indexes onto another tablespace on a different physical SSD.
As we can see, to fetch the ~5000 result rows it has to read more or less the same number of blocks here, using a bitmap heap scan in both cases.
The correlation of the two columns in this case:
 attname | correlation | n_distinct 
---------+-------------+------------
 id_1    |  -0.0047043 |      23003
 id_2    |  0.00157998 |      25004
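The correlation and n_distinct figures shown here can be pulled from the pg_stats view; a sketch (table name nums as in the test):

```sql
-- Planner statistics for the two indexed columns:
SELECT attname, correlation, n_distinct
FROM pg_stats
WHERE schemaname = 'public'
  AND tablename  = 'nums'
  AND attname IN ('id_1', 'id_2');
```

A correlation near 0 means the column values are scattered randomly across the heap; a correlation of 1 means the physical row order matches the column order.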

Now, retesting the queries after CLUSTER ... USING id_1_idx and after re-analyzing, the statistics look like this:
 attname | correlation  | n_distinct 
---------+--------------+------------
 id_1    |            1 |      22801
 id_2    | -0.000898521 |      24997

…with this performance:
explain (analyze, buffers) select * from nums where id_1 = 5606001;
                                                       QUERY PLAN                                                       
------------------------------------------------------------------------------------------------------------------------
 Index Scan using id_1_idx on nums  (cost=0.57..179.02 rows=5083 width=80) (actual time=2.604..5.256 rows=5210 loops=1)
   Index Cond: (id_1 = 5606001)
   Buffers: shared read=90
   I/O Timings: read=4.107
 Planning Time: 4.039 ms
 Execution Time: 5.563 ms
(6 rows)

…which is far better than expected, but:
explain (analyze, buffers) select * from nums where id_2 = 5606001;
                                                       QUERY PLAN                                                       
------------------------------------------------------------------------------------------------------------------------
 Bitmap Heap Scan on nums  (cost=58.57..7140.12 rows=4775 width=80) (actual time=5.866..99.707 rows=4813 loops=1)
   Recheck Cond: (id_2 = 5606001)
   Heap Blocks: exact=4806
   Buffers: shared read=4823
   I/O Timings: read=31.389
   ->  Bitmap Index Scan on id_2_idx  (cost=0.00..57.38 rows=4775 width=0) (actual time=2.992..2.992 rows=4813 loops=1)
         Index Cond: (id_2 = 5606001)
         Buffers: shared read=17
         I/O Timings: read=0.338
 Planning Time: 0.210 ms
 Execution Time: 100.155 ms
(11 rows)

…more than twice as slow as the first (random) run, even though almost exactly the same number of blocks was read.
Why did it slow down so much?
Physically reordering the table data using index id_1_idx also affected the physical order of the other indexed column. The purpose of the bitmap heap scan is to read the list of blocks obtained from the bitmap index scan in physical (on-disk) order. In the first case (random data), there was a decent chance that multiple rows matching the condition were located in consecutive blocks on disk, which reduced random disk access.
Interestingly (though possibly only because I am running on SSDs), disabling the bitmap scan shows acceptable numbers:
explain (analyze, buffers) select * from nums where id_2 = 5606001;
                                                        QUERY PLAN                                                        
--------------------------------------------------------------------------------------------------------------------------
 Index Scan using id_2_idx on nums  (cost=0.57..7257.12 rows=4775 width=80) (actual time=0.151..35.453 rows=4813 loops=1)
   Index Cond: (id_2 = 5606001)
   Buffers: shared read=4823
   I/O Timings: read=30.051
 Planning Time: 1.927 ms
 Execution Time: 35.810 ms
(6 rows)
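Disabling bitmap scans for an experiment like the one above is done with a session-local planner setting:

```sql
SET enable_bitmapscan = off;  -- session-only; planner falls back to a plain index scan
-- EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM nums WHERE id_2 = 5606001;
RESET enable_bitmapscan;      -- restore the default afterwards
```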

All of these numbers are almost cold-start executions (as you can see, with no or very low Buffers: shared hit numbers).
Interestingly, the I/O timings for the bitmap scan and the index scan are quite similar for id_2, but the bitmap scan seems to introduce a huge overhead here.

Regarding postgresql - PostgreSQL: One index is fast and another one is slow, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/55001312/
