
In Depth: Source-Code Analysis of Parallel Degree in PostgreSQL 9.6

toyiye 2024-06-21 12:19

Now that parallel query has shipped in PostgreSQL 9.6, many of you have probably started testing it. Last night I tested a bit-operation workload typical of tag-system applications, and found performance roughly 7x better than the non-parallel case.

I had not studied the code carefully yesterday, and no matter how I tested, only 8 parallel workers were ever used. Today I read the code and found the reason. The parallel degree is in fact determined by several factors:

1. The maximum allowed parallel degree

max_parallel_degree

2. The table-level parallel degree (set via CREATE TABLE or ALTER TABLE)

parallel_degree

If the table's parallel_degree is set, the final degree is min(max_parallel_degree, parallel_degree):

/*
 * Use the table parallel_degree, but don't go further than
 * max_parallel_degree.
 */
parallel_degree = Min(rel->rel_parallel_degree, max_parallel_degree);

3. If the table's parallel_degree is not set, the degree is computed from the table size and the hard-coded parallel_threshold value (see the function create_plain_partial_paths), and is still capped by max_parallel_degree.

The code is as follows:

src/backend/optimizer/util/plancat.c

void
get_relation_info(PlannerInfo *root, Oid relationObjectId, bool inhparent,
                  RelOptInfo *rel)
{
    ...
    /* Retrieve the parallel_degree reloption, if set. */
    rel->rel_parallel_degree = RelationGetParallelDegree(relation, -1);
    ...
}

src/include/utils/rel.h

/*
 * RelationGetParallelDegree
 *     Returns the relation's parallel_degree.  Note multiple eval of argument!
 */
#define RelationGetParallelDegree(relation, defaultpd) \
    ((relation)->rd_options ? \
     ((StdRdOptions *) (relation)->rd_options)->parallel_degree : (defaultpd))

src/backend/optimizer/path/allpaths.c

/*
 * create_plain_partial_paths
 *     Build partial access paths for parallel scan of a plain relation
 */
static void
create_plain_partial_paths(PlannerInfo *root, RelOptInfo *rel)
{
    int parallel_degree = 1;

    /*
     * If the user has set the parallel_degree reloption, we decide what to do
     * based on the value of that option.  Otherwise, we estimate a value.
     */
    if (rel->rel_parallel_degree != -1)
    {
        /*
         * If parallel_degree = 0 is set for this relation, bail out.  The
         * user does not want a parallel path for this relation.
         */
        if (rel->rel_parallel_degree == 0)
            return;

        /*
         * Use the table parallel_degree, but don't go further than
         * max_parallel_degree.
         */
        parallel_degree = Min(rel->rel_parallel_degree, max_parallel_degree);
    }
    else
    {
        int parallel_threshold = 1000;

        /*
         * If this relation is too small to be worth a parallel scan, just
         * return without doing anything ... unless it's an inheritance child.
         * In that case, we want to generate a parallel path here anyway.  It
         * might not be worthwhile just for this relation, but when combined
         * with all of its inheritance siblings it may well pay off.
         */
        if (rel->pages < parallel_threshold &&
            rel->reloptkind == RELOPT_BASEREL)
            return;

        /*
         * When the table-level degree is not set, the degree is computed here
         * from the table size and parallel_threshold.
         *
         * Limit the degree of parallelism logarithmically based on the size
         * of the relation.  This probably needs to be a good deal more
         * sophisticated, but we need something here for now.
         */
        while (rel->pages > parallel_threshold * 3 &&
               parallel_degree < max_parallel_degree)
        {
            parallel_degree++;
            parallel_threshold *= 3;
            if (parallel_threshold >= PG_INT32_MAX / 3)
                break;
        }
    }

    /* Add an unordered partial path based on a parallel sequential scan. */
    add_partial_path(rel, create_seqscan_path(root, rel, NULL, parallel_degree));
}

Test:

Raise the degree to 32 parallel workers, since the machine has 32 cores.

postgres=# alter table t_bit2 set (parallel_degree =32);
postgres=# explain (analyze,verbose,timing,costs,buffers) select count(*) from t_bit2 where bitand(id, '10101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010')=B'10101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010';

QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Finalize Aggregate (cost=1551053.25..1551053.26 rows=1 width=8) (actual time=31092.551..31092.552 rows=1 loops=1)

Output: count(*)

Buffers: shared hit=1473213

-> Gather (cost=1551049.96..1551053.17 rows=32 width=8) (actual time=31060.939..31092.469 rows=33 loops=1)

Output: (PARTIAL count(*))

Workers Planned: 32

Workers Launched: 32

Buffers: shared hit=1473213

-> Partial Aggregate (cost=1550049.96..1550049.97 rows=1 width=8) (actual time=31047.074..31047.075 rows=1 loops=33)

Output: PARTIAL count(*)

Buffers: shared hit=1470589

Worker 0: actual time=31037.287..31037.288 rows=1 loops=1

Buffers: shared hit=43483

Worker 1: actual time=31035.803..31035.804 rows=1 loops=1

Buffers: shared hit=45112

Worker 2: actual time=31036.950..31036.951 rows=1 loops=1

Buffers: shared hit=43238

Worker 3: actual time=31063.823..31063.823 rows=1 loops=1

Buffers: shared hit=43931

Worker 4: actual time=31035.934..31035.935 rows=1 loops=1

Buffers: shared hit=42676

Worker 5: actual time=31035.334..31035.335 rows=1 loops=1

Buffers: shared hit=45662

Worker 6: actual time=31038.237..31038.238 rows=1 loops=1

Buffers: shared hit=44882

Worker 7: actual time=31043.767..31043.767 rows=1 loops=1

Buffers: shared hit=47740

Worker 8: actual time=31038.297..31038.297 rows=1 loops=1

Buffers: shared hit=47779

Worker 9: actual time=31056.614..31056.614 rows=1 loops=1

Buffers: shared hit=43574

Worker 10: actual time=31040.406..31040.406 rows=1 loops=1

Buffers: shared hit=48292

Worker 11: actual time=31041.379..31041.384 rows=1 loops=1

Buffers: shared hit=48654

Worker 12: actual time=31019.450..31019.451 rows=1 loops=1

Buffers: shared hit=44657

Worker 13: actual time=31040.666..31040.668 rows=1 loops=1

Buffers: shared hit=42903

Worker 14: actual time=31029.439..31029.440 rows=1 loops=1

Buffers: shared hit=51098

Worker 15: actual time=31032.364..31032.364 rows=1 loops=1

Buffers: shared hit=48112

Worker 16: actual time=31043.330..31043.330 rows=1 loops=1

Buffers: shared hit=40712

Worker 17: actual time=31052.240..31052.241 rows=1 loops=1

Buffers: shared hit=43938

Worker 18: actual time=31052.810..31052.811 rows=1 loops=1

Buffers: shared hit=46617

Worker 19: actual time=31052.894..31052.894 rows=1 loops=1

Buffers: shared hit=40536

Worker 20: actual time=31053.521..31053.521 rows=1 loops=1

Buffers: shared hit=43820

Worker 21: actual time=31054.699..31054.699 rows=1 loops=1

Buffers: shared hit=43356

Worker 22: actual time=31055.046..31055.047 rows=1 loops=1

Buffers: shared hit=44030

Worker 23: actual time=31055.070..31055.073 rows=1 loops=1

Buffers: shared hit=40500

Worker 24: actual time=31055.108..31055.108 rows=1 loops=1

Buffers: shared hit=42840

Worker 25: actual time=31054.733..31054.735 rows=1 loops=1

Buffers: shared hit=40342

Worker 26: actual time=31055.962..31055.963 rows=1 loops=1

Buffers: shared hit=44344

Worker 27: actual time=31056.279..31056.280 rows=1 loops=1

Buffers: shared hit=47810

Worker 28: actual time=31056.324..31056.325 rows=1 loops=1

Buffers: shared hit=44747

Worker 29: actual time=31056.259..31056.259 rows=1 loops=1

Buffers: shared hit=43673

Worker 30: actual time=31057.195..31057.195 rows=1 loops=1

Buffers: shared hit=40444

Worker 31: actual time=31055.871..31055.876 rows=1 loops=1

Buffers: shared hit=46439

-> Parallel Seq Scan on public.t_bit2 (cost=0.00..1549983.80 rows=26465 width=0) (actual time=0.040..17244.827 rows=6060606 loops=33)

Output: id

Filter: (bitand(t_bit2.id, B'10101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010'::"bit") = B'10101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010'::"bit")

Buffers: shared hit=1470589

Worker 0: actual time=0.035..17314.296 rows=5913688 loops=1

Buffers: shared hit=43483

Worker 1: actual time=0.030..16965.158 rows=6135232 loops=1

Buffers: shared hit=45112

Worker 2: actual time=0.040..17097.631 rows=5880368 loops=1

Buffers: shared hit=43238

Worker 3: actual time=0.026..17121.847 rows=5974616 loops=1

Buffers: shared hit=43931

Worker 4: actual time=0.057..17024.045 rows=5803936 loops=1

Buffers: shared hit=42676

Worker 5: actual time=0.048..17249.414 rows=6210032 loops=1

Buffers: shared hit=45662

Worker 6: actual time=0.039..17190.435 rows=6103952 loops=1

Buffers: shared hit=44882

Worker 7: actual time=0.043..17203.755 rows=6492640 loops=1

Buffers: shared hit=47740

Worker 8: actual time=0.046..17204.168 rows=6497944 loops=1

Buffers: shared hit=47779

Worker 9: actual time=0.026..17088.716 rows=5926064 loops=1

Buffers: shared hit=43574

Worker 10: actual time=0.041..17114.139 rows=6567712 loops=1

Buffers: shared hit=48292

Worker 11: actual time=0.038..17237.905 rows=6616944 loops=1

Buffers: shared hit=48654

Worker 12: actual time=0.138..17259.257 rows=6073352 loops=1

Buffers: shared hit=44657

Worker 13: actual time=0.060..17204.828 rows=5834808 loops=1

Buffers: shared hit=42903

Worker 14: actual time=0.041..17168.707 rows=6949328 loops=1

Buffers: shared hit=51098

Worker 15: actual time=0.034..17294.266 rows=6543232 loops=1

Buffers: shared hit=48112

Worker 16: actual time=0.037..17166.335 rows=5536832 loops=1

Buffers: shared hit=40712

Worker 17: actual time=0.033..17224.710 rows=5975568 loops=1

Buffers: shared hit=43938

Worker 18: actual time=0.027..17218.971 rows=6339912 loops=1

Buffers: shared hit=46617

Worker 19: actual time=0.034..17227.116 rows=5512896 loops=1

Buffers: shared hit=40536

Worker 20: actual time=0.033..17169.460 rows=5959520 loops=1

Buffers: shared hit=43820

Worker 21: actual time=0.033..17176.166 rows=5896416 loops=1

Buffers: shared hit=43356

Worker 22: actual time=0.024..17273.591 rows=5988080 loops=1

Buffers: shared hit=44030

Worker 23: actual time=0.026..17370.737 rows=5508000 loops=1

Buffers: shared hit=40500

Worker 24: actual time=0.029..17087.689 rows=5826240 loops=1

Buffers: shared hit=42840

Worker 25: actual time=0.060..17293.165 rows=5486512 loops=1

Buffers: shared hit=40342

Worker 26: actual time=0.024..17339.611 rows=6030680 loops=1

Buffers: shared hit=44344

Worker 27: actual time=0.035..17416.782 rows=6502160 loops=1

Buffers: shared hit=47810

Worker 28: actual time=0.033..17478.751 rows=6085592 loops=1

Buffers: shared hit=44747

Worker 29: actual time=0.037..17318.121 rows=5939528 loops=1

Buffers: shared hit=43673

Worker 30: actual time=0.058..17525.592 rows=5500384 loops=1

Buffers: shared hit=40444

Worker 31: actual time=0.031..17580.908 rows=6315704 loops=1

Buffers: shared hit=46439

Planning time: 0.354 ms

Execution time: 31157.006 ms

(145 rows)

The bit-operation query (parallel):

postgres=# select count(*) from t_bit2 where bitand(id, '10101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010')=B'10101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010';

   count
-----------
 200000000
(1 row)

Time: 4320.931 ms

COUNT

postgres=# select count(*) from t_bit2;

   count
-----------
 200000000
(1 row)

Time: 1896.647 ms

Query performance with parallelism disabled:

postgres=# set force_parallel_mode =off;
SET
postgres=# alter table t_bit2 set (parallel_degree =0);
ALTER TABLE

postgres=# \timing

Timing is on.

postgres=# select count(*) from t_bit2 where bitand(id, '10101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010')=B'10101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010';

   count
-----------
 200000000
(1 row)

Time: 53098.480 ms

postgres=# select count(*) from t_bit2;

   count
-----------
 200000000
(1 row)

Time: 18504.679 ms

Table size:

postgres=# \dt+ t_bit2

List of relations

Schema | Name | Type | Owner | Size | Description

--------+--------+-------+----------+-------+-------------

public | t_bit2 | table | postgres | 11 GB |

(1 row)

References

http://www.postgresql.org/docs/9.6/static/sql-createtable.html

parallel_degree (integer)

The parallel degree for a table is the number of workers that should be used to assist a parallel scan of that table. If not set, the system will determine a value based on the relation size. The actual number of workers chosen by the planner may be less, for example due to the setting of max_parallel_degree.

http://www.postgresql.org/docs/9.6/static/runtime-config-query.html#RUNTIME-CONFIG-QUERY-OTHER

force_parallel_mode (enum)

Allows the use of parallel queries for testing purposes even in cases where no performance benefit is expected. The allowed values of force_parallel_mode are off (use parallel mode only when it is expected to improve performance), on (force parallel query for all queries for which it is thought to be safe), and regress (like on, but with additional behavior changes as explained below).

More specifically, setting this value to on will add a Gather node to the top of any query plan for which this appears to be safe, so that the query runs inside of a parallel worker. Even when a parallel worker is not available or cannot be used, operations such as starting a subtransaction that would be prohibited in a parallel query context will be prohibited unless the planner believes that this will cause the query to fail. If failures or unexpected results occur when this option is set, some functions used by the query may need to be marked PARALLEL UNSAFE (or, possibly, PARALLEL RESTRICTED).

Setting this value to regress has all of the same effects as setting it to on plus some additional effects that are intended to facilitate automated regression testing. Normally, messages from a parallel worker include a context line indicating that, but a setting of regress suppresses this line so that the output is the same as in non-parallel execution. Also, the Gather nodes added to plans by this setting are hidden in EXPLAIN output so that the output matches what would be obtained if this setting were turned off.

http://www.postgresql.org/docs/9.6/static/runtime-config-resource.html#RUNTIME-CONFIG-RESOURCE-ASYNC-BEHAVIOR

max_parallel_degree (integer)

Sets the maximum number of workers that can be started for an individual parallel operation. Parallel workers are taken from the pool of processes established by max_worker_processes. Note that the requested number of workers may not actually be available at runtime. If this occurs, the plan will run with fewer workers than expected, which may be inefficient. The default value is 2. Setting this value to 0 disables parallel query execution.

http://www.postgresql.org/docs/9.6/static/runtime-config-query.html#RUNTIME-CONFIG-QUERY-CONSTANTS

parallel_setup_cost (floating point)

Sets the planner's estimate of the cost of launching parallel worker processes. The default is 1000.

parallel_tuple_cost (floating point)

Sets the planner's estimate of the cost of transferring one tuple from a parallel worker process to another process. The default is 0.1.

Welcome to follow the Yunqi Community WeChat account yunqiinsight for more in-depth technical content.
