mysql - How to optimize an extremely slow MySQL query that finds correlations

Tags: mysql query-optimization

I have a very slow (usually close to 60 seconds) MySQL query that tries to find correlations between how users voted in one poll and how they voted in all previous polls.

Basically, we gather the user IDs of everyone who voted for one particular option in a given poll.

Then we look at how that group voted in each previous poll, and compare those results against how everyone (not just the group) voted in that poll. The difference between the subgroup result and the overall result is the deviation, and the query sorts by deviation to find the strongest correlations. For example, if 60% of the subgroup chose some option but only 40% of all voters did, that option's deviation is |0.40 - 0.60| = 0.20.

The query is a bit of a mess:

(SELECT p_id as poll_id, o_id AS option_id, description, optCount AS option_count, subgroup_percent, total_percent, ABS(total_percent - subgroup_percent) AS deviation
FROM(
   SELECT poll_id AS p_id, 
       option_id AS o_id, 
       (SELECT description FROM `option` WHERE id = o_id) AS description,
       COUNT(*) AS optCount, 
       (SELECT COUNT(*) FROM response INNER JOIN user_ids_122 ON response.user_id = user_ids_122.user_id WHERE option_id = o_id ) / 
       (SELECT COUNT(*) FROM response INNER JOIN user_ids_122 ON response.user_id = user_ids_122.user_id WHERE poll_id = p_id) AS subgroup_percent,
       (SELECT COUNT(*) FROM response WHERE option_id = o_id) / 
       (SELECT COUNT(*) FROM response WHERE poll_id = p_id) AS total_percent
   FROM response 
   INNER JOIN user_ids_122 ON response.user_id = user_ids_122.user_id 
   WHERE poll_id < '61'
   GROUP BY option_id DESC
   ) AS derived_table_122
)
ORDER BY deviation DESC, option_count DESC

Note that user_ids_122 is a temporary table, created beforehand, containing the IDs of all users who voted for option ID 122.

The response table has about 65,000 rows, the user table about 7,000 rows, and the option table about 130 rows.
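For context, a table like user_ids_122 would typically be created along these lines (a hypothetical sketch; the actual creation statement isn't shown in the question):

```sql
-- Hypothetical reconstruction: collect everyone who voted for option 122.
CREATE TEMPORARY TABLE user_ids_122 AS
SELECT DISTINCT user_id
FROM response
WHERE option_id = 122;

-- An index on user_id speeds up the repeated joins against this table.
ALTER TABLE user_ids_122 ADD INDEX (user_id);
```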

Update:

Here is the EXPLAIN output...

id  select_type         table           type    possible_keys       key         key_len  ref                                  rows    Extra
1   PRIMARY     <derived2>  ALL     NULL    NULL    NULL    NULL    121     Using filesort
2   DERIVED     user_ids_122    ALL     NULL    NULL    NULL    NULL    74  Using temporary; Using filesort
2   DERIVED     response    ref     poll_id,user_id     user_id     4   correlated.user_ids_122.user_id     780     Using where
7   DEPENDENT SUBQUERY  response    ref     poll_id     poll_id     4   func    7800    Using index
6   DEPENDENT SUBQUERY  response    ref     option_id   option_id   4   func    7800    Using index
5   DEPENDENT SUBQUERY  user_ids_122    ALL     NULL    NULL    NULL    NULL    74   
5   DEPENDENT SUBQUERY  response    ref     poll_id,user_id     poll_id     4   func    7800    Using where
4   DEPENDENT SUBQUERY  user_ids_122    ALL     NULL    NULL    NULL    NULL    74   
4   DEPENDENT SUBQUERY  response    ref     user_id,option_id   user_id     4   correlated.user_ids_122.user_id     780     Using where
3   DEPENDENT SUBQUERY  option  eq_ref  PRIMARY     PRIMARY     4   func    1 

Update 2:

Each row in the response table looks like this:

id (INT)   poll_id (INT)   user_id (INT)   option_id (INT)   created (DATETIME)
7          7               1               14                2011-03-17 09:25:10

Each row in the option table looks like this:

id (INT)   poll_id (INT)   text (TEXT)     description (TEXT)
14         7               No              people who dislike country music 

Each row in the user table looks like this:

id (INT)   email (TEXT)         created (DATETIME)
1          user@example.com     2011-02-15 11:16:03

Best Answer

Three things:

  • You are recomputing the same things countless times (in reality, everything here depends on just a few parameters that are identical across many rows)
  • Aggregating over big chunks of data (JOINs) is more efficient than aggregating over small chunks (subqueries)
  • MySQL's subqueries are very slow.

So when you compute "vote count per option_id" (which requires scanning the big table) and then need "vote count per poll_id", well, don't hit the big table again; just reuse the previous result!

You can do this with ROLLUP.
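For reference, MySQL's WITH ROLLUP modifier produces the per-poll totals in the same pass as the per-option counts, which is exactly this kind of reuse (a minimal sketch against the question's response table):

```sql
-- One scan gives votes per (poll_id, option_id); ROLLUP appends, for each
-- poll, a row with option_id = NULL holding that poll's total, plus a
-- final grand-total row with both columns NULL.
SELECT poll_id, option_id, COUNT(*) AS votes
FROM response
GROUP BY poll_id, option_id WITH ROLLUP;
```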

Here is a query, run on Postgres, that does what you need.

To get MySQL to execute it, you would replace all the "WITH foo AS (SELECT ...)" clauses with temporary tables. That is easy. MySQL's in-memory temporary tables are fast; don't be afraid to use them, since they let you reuse the results of previous steps and save a lot of computation.

I generated random test data, and it seems to work. It executes in 0.3 seconds...

WITH 
-- users of interest : target group
uids AS (
    SELECT DISTINCT user_id 
        FROM    options 
        JOIN    responses USING (option_id)
        WHERE   poll_id=22
    ),
-- votes of everyone and target group
votes AS (
    SELECT poll_id, option_id, sum(all_votes) AS all_votes, sum(target_votes) AS target_votes
        FROM (
            SELECT option_id, count(*) AS all_votes, count(uids.user_id) AS target_votes
                FROM        responses 
                LEFT JOIN   uids USING (user_id)
                GROUP BY option_id
        ) v
        JOIN    options     USING (option_id)
        GROUP BY poll_id, option_id
    ),
-- totals for all polls (reuse previous result)
totals AS (
    SELECT poll_id, sum(all_votes) AS all_votes, sum(target_votes) AS target_votes
        FROM votes
        GROUP BY poll_id
    ),
poll_options AS (
    SELECT poll_id, count(*) AS poll_option_count
        FROM options 
        GROUP BY poll_id
    )
-- reuse previous tables to get some stats
SELECT  *, ABS(total_percent - subgroup_percent) AS deviation
    FROM (
        SELECT
            poll_id,
            option_id,
            v.target_votes / v.all_votes AS subgroup_percent,
            t.target_votes / t.all_votes AS total_percent,
            poll_option_count
        FROM votes  v
        JOIN totals t           USING (poll_id)
        JOIN poll_options po    USING (poll_id)
    ) AS foo
    ORDER BY deviation DESC, poll_option_count DESC;

                                                                                  QUERY PLAN                                                                                
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sort  (cost=14910.46..14910.56 rows=40 width=144) (actual time=299.844..299.862 rows=200 loops=1)
   Sort Key: (abs(((t.target_votes / t.all_votes) - (v.target_votes / v.all_votes)))), po.poll_option_count
   Sort Method:  quicksort  Memory: 52kB
   CTE uids
     ->  HashAggregate  (cost=1801.43..1850.52 rows=4909 width=4) (actual time=3.935..4.793 rows=4860 loops=1)
           ->  Nested Loop  (cost=0.00..1789.16 rows=4909 width=4) (actual time=0.029..2.555 rows=4860 loops=1)
                 ->  Seq Scan on options  (cost=0.00..3.50 rows=5 width=4) (actual time=0.008..0.032 rows=5 loops=1)
                       Filter: (poll_id = 22)
                 ->  Index Scan using responses_option_id_key on responses  (cost=0.00..344.86 rows=982 width=8) (actual time=0.012..0.298 rows=972 loops=5)
                       Index Cond: (public.responses.option_id = public.options.option_id)
   CTE votes
     ->  HashAggregate  (cost=13029.43..13032.43 rows=200 width=24) (actual time=298.255..298.317 rows=200 loops=1)
           ->  Hash Join  (cost=13019.68..13027.43 rows=200 width=24) (actual time=297.953..298.103 rows=200 loops=1)
                 Hash Cond: (public.responses.option_id = public.options.option_id)
                 ->  HashAggregate  (cost=13014.18..13017.18 rows=200 width=8) (actual time=297.839..297.879 rows=200 loops=1)
                       ->  Merge Left Join  (cost=399.13..11541.43 rows=196366 width=8) (actual time=9.301..230.467 rows=196366 loops=1)
                             Merge Cond: (public.responses.user_id = uids.user_id)
                             ->  Index Scan using responses_pkey on responses  (cost=0.00..8585.75 rows=196366 width=8) (actual time=0.015..121.971 rows=196366 loops=1)
                             ->  Sort  (cost=399.13..411.40 rows=4909 width=4) (actual time=9.281..22.044 rows=137645 loops=1)
                                   Sort Key: uids.user_id
                                   Sort Method:  quicksort  Memory: 420kB
                                   ->  CTE Scan on uids  (cost=0.00..98.18 rows=4909 width=4) (actual time=3.937..6.549 rows=4860 loops=1)
                 ->  Hash  (cost=3.00..3.00 rows=200 width=8) (actual time=0.095..0.095 rows=200 loops=1)
                       ->  Seq Scan on options  (cost=0.00..3.00 rows=200 width=8) (actual time=0.007..0.043 rows=200 loops=1)
   CTE totals
     ->  HashAggregate  (cost=5.50..8.50 rows=200 width=68) (actual time=298.629..298.640 rows=40 loops=1)
           ->  CTE Scan on votes  (cost=0.00..4.00 rows=200 width=68) (actual time=298.257..298.425 rows=200 loops=1)
   CTE poll_options
     ->  HashAggregate  (cost=4.00..4.50 rows=40 width=4) (actual time=0.091..0.101 rows=40 loops=1)
           ->  Seq Scan on options  (cost=0.00..3.00 rows=200 width=4) (actual time=0.005..0.020 rows=200 loops=1)
   ->  Hash Join  (cost=6.95..13.45 rows=40 width=144) (actual time=298.994..299.554 rows=200 loops=1)
         Hash Cond: (t.poll_id = v.poll_id)
         ->  CTE Scan on totals t  (cost=0.00..4.00 rows=200 width=68) (actual time=298.632..298.669 rows=40 loops=1)
         ->  Hash  (cost=6.45..6.45 rows=40 width=84) (actual time=0.335..0.335 rows=200 loops=1)
               ->  Hash Join  (cost=1.30..6.45 rows=40 width=84) (actual time=0.140..0.263 rows=200 loops=1)
                     Hash Cond: (v.poll_id = po.poll_id)
                     ->  CTE Scan on votes v  (cost=0.00..4.00 rows=200 width=72) (actual time=0.001..0.030 rows=200 loops=1)
                     ->  Hash  (cost=0.80..0.80 rows=40 width=12) (actual time=0.130..0.130 rows=40 loops=1)
                           ->  CTE Scan on poll_options po  (cost=0.00..0.80 rows=40 width=12) (actual time=0.093..0.119 rows=40 loops=1)
 Total runtime: 300.132 ms
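The temp-table translation for MySQL suggested above might look like the following sketch, adapted to the question's singular table names (response, `option`) and id columns; it has not been tested against the original data:

```sql
-- Step 1: target group (the Postgres "uids" CTE).
CREATE TEMPORARY TABLE uids AS
SELECT DISTINCT r.user_id
FROM `option` o
JOIN response r ON r.option_id = o.id
WHERE o.poll_id = 22;
ALTER TABLE uids ADD INDEX (user_id);

-- Step 2: per-option counts for everyone and for the target group
-- (the "votes" CTE); a single pass over the big table.
CREATE TEMPORARY TABLE votes AS
SELECT o.poll_id, r.option_id,
       COUNT(*)         AS all_votes,
       COUNT(u.user_id) AS target_votes
FROM response r
LEFT JOIN uids u ON u.user_id = r.user_id
JOIN `option` o  ON o.id = r.option_id
GROUP BY o.poll_id, r.option_id;

-- Step 3: per-poll totals, reusing the previous result (the "totals" CTE)
-- instead of rescanning response.
CREATE TEMPORARY TABLE totals AS
SELECT poll_id,
       SUM(all_votes)    AS all_votes,
       SUM(target_votes) AS target_votes
FROM votes
GROUP BY poll_id;

-- Final step: deviation per option (ordering by per-poll option count
-- is omitted here for brevity). Each temp table is referenced only once
-- per statement, as MySQL requires.
SELECT v.poll_id, v.option_id,
       v.target_votes / v.all_votes AS subgroup_percent,
       t.target_votes / t.all_votes AS total_percent,
       ABS(t.target_votes / t.all_votes - v.target_votes / v.all_votes) AS deviation
FROM votes v
JOIN totals t USING (poll_id)
ORDER BY deviation DESC;
```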

A similar question about "mysql - How to optimize an extremely slow MySQL query that finds correlations" can be found on Stack Overflow: https://stackoverflow.com/questions/5952020/
