MySQL 5.7 Group Replication (MGR): Principles and a Setup Example


I. MGR Introduction

Key capabilities: data consistency, high availability, and read/write splitting.

MySQL Group Replication (MGR) is a high-availability and high-scalability solution officially released by MySQL in December 2016. It provides a highly available, highly scalable, and highly reliable MySQL cluster service.

  • High consistency: group replication is built on native replication plus a Paxos-based protocol and ships as a plugin, guaranteeing consistent, safe data across the group;

  • High fault tolerance: the group keeps working as long as a majority of nodes are alive. Failure detection is automatic, and when different nodes contend for the same rows the conflict does not produce an error but is resolved first-committer-wins; a split-brain protection mechanism is also built in;

  • High scalability: adding and removing nodes is automatic. A new member synchronizes state from the existing members until it catches up, and when a member is removed the remaining nodes automatically update and maintain the new group membership;

MGR supports multi-primary mode, but the official recommendation is single-primary mode:

  • In multi-primary mode, clients can write to any MySQL node, and all servers can process updates at the same time.
  • In single-primary mode, the group automatically elects a primary node to handle all write requests, while both the primary and the other members serve reads; every update is executed on the primary (see the query sketch below).
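
In single-primary mode you often need to know who the primary currently is. A minimal sketch (MySQL 5.7, assuming the group is already running) queries the status variable that holds the primary's UUID:

# Returns the server_uuid of the current primary (empty in multi-primary mode)
SELECT VARIABLE_VALUE AS current_primary_uuid
FROM performance_schema.global_status
WHERE VARIABLE_NAME = 'group_replication_primary_member';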


II. MGR Principles and Setup

  • Traditional asynchronous and semi-synchronous replication cannot guarantee data consistency; to address this defect, MySQL officially released group replication (MGR) in version 5.7.17.

  • A replication group consists of several nodes, and a transaction can only commit after a majority of the group (N / 2 + 1) has agreed to (certified) it; the query after this list illustrates the majority rule. For example, in a group of 3 nodes, the Consensus layer is the consistency-protocol layer: committing a transaction involves group communication, and the transaction is only committed and acknowledged once 2 nodes have certified it.

  • Group replication was introduced mainly to solve the data-inconsistency problems that traditional asynchronous and semi-synchronous replication can produce. Relying on a distributed consensus protocol (a Paxos variant), it achieves eventual consistency of data across the group and offers a genuinely highly available solution (whether it is truly highly available is still debatable). Its multi-write capability also brings hope for multi-active architectures.
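
To make the majority requirement concrete, the query below is a small sketch against the standard performance_schema table: it reports the current group size and the number of members (N/2 + 1) that must agree for a transaction to certify or for the group to stay writable.

# Current member count and the majority needed for decisions
SELECT COUNT(*) AS group_size,
       FLOOR(COUNT(*) / 2) + 1 AS majority_needed
FROM performance_schema.replication_group_members;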

1. Create the OS user

useradd mysql

2. Upload and unpack MySQL 5.7.20

tar -zxvf mysql-5.7.20-linux-glibc2.12-x86_64.tar.gz 
mv mysql-5.7.20-linux-glibc2.12-x86_64 /usr/local/mysql

3. Environment variables and directories

vi /root/.bash_profile
export PATH=$PATH:/usr/local/mysql/bin
source /root/.bash_profile

mkdir -p /data/3306/data  /data/3307/data /data/3308/data
chown -R mysql.mysql /data /usr/local/mysql

4. Configuration files

Configuration example (saved as /data/3306/my.cnf, /data/3307/my.cnf, and /data/3308/my.cnf, matching the startup commands in step 5):
++++++++++3306++++++
[mysqld]
user=mysql
datadir=/data/3306/data
basedir=/usr/local/mysql
port=3306
socket=/data/3306/mysql.sock
server_id=1
gtid_mode=ON
enforce_gtid_consistency=ON
master_info_repository=TABLE
relay_log_info_repository=TABLE
binlog_checksum=NONE
log_slave_updates=ON
log_bin=binlog
binlog_format=ROW
transaction_write_set_extraction=XXHASH64
loose-group_replication_group_name="22d56f7c-dfe5-4eb1-a21a-cf9c27e8d625"
loose-group_replication_start_on_boot=off
loose-group_replication_local_address="192.168.29.128:33061"
loose-group_replication_group_seeds="192.168.29.128:33061,192.168.29.128:33062,192.168.29.128:33063"
loose-group_replication_bootstrap_group=off
loose-group_replication_single_primary_mode=FALSE
loose-group_replication_enforce_update_everywhere_checks= TRUE

++++++++++3307++++++
[mysqld]
user=mysql
datadir=/data/3307/data
basedir=/usr/local/mysql
port=3307
socket=/data/3307/mysql.sock
server_id=2
gtid_mode=ON
enforce_gtid_consistency=ON
master_info_repository=TABLE
relay_log_info_repository=TABLE
binlog_checksum=NONE
log_slave_updates=ON
log_bin=binlog
binlog_format=ROW
transaction_write_set_extraction=XXHASH64
loose-group_replication_group_name="22d56f7c-dfe5-4eb1-a21a-cf9c27e8d625"
loose-group_replication_start_on_boot=off
loose-group_replication_local_address="192.168.29.128:33062"
loose-group_replication_group_seeds="192.168.29.128:33061,192.168.29.128:33062,192.168.29.128:33063"
loose-group_replication_bootstrap_group=off
loose-group_replication_single_primary_mode=FALSE
loose-group_replication_enforce_update_everywhere_checks= TRUE

++++++++++3308++++++
[mysqld]
user=mysql
datadir=/data/3308/data
basedir=/usr/local/mysql
port=3308
socket=/data/3308/mysql.sock
server_id=3
gtid_mode=ON
enforce_gtid_consistency=ON
master_info_repository=TABLE
relay_log_info_repository=TABLE
binlog_checksum=NONE
log_slave_updates=ON
log_bin=binlog
binlog_format=ROW
transaction_write_set_extraction=XXHASH64
loose-group_replication_group_name="22d56f7c-dfe5-4eb1-a21a-cf9c27e8d625"
loose-group_replication_start_on_boot=off
loose-group_replication_local_address="192.168.29.128:33063"
loose-group_replication_group_seeds="192.168.29.128:33061,192.168.29.128:33062,192.168.29.128:33063"
loose-group_replication_bootstrap_group=off
loose-group_replication_single_primary_mode=FALSE
loose-group_replication_enforce_update_everywhere_checks= TRUE

========"组复制部分,配置文件介绍:"==========
group_replication变量使用的loose-前缀是指示Server启用时尚未加载复制插件也将继续启动
transaction_write_set_extraction = XXHASH64
# 指示Server必须为每个事务收集写集合,并使用XXHASH64哈希算法将其编码为散列
loose-group_replication_group_name="01e5fb97-be64-41f7-bafd-3afc7a6ab555"
# 表示将加入或者创建的复制组命名为01e5fb97-be64-41f7-bafd-3afc7a6ab555
# 可自定义(通过cat /proc/sys/kernel/random/uuid)
loose-group_replication_start_on_boot=off 
# 设置为Server启动时不自动启动组复制
loose-group_replication_local_address="192.168.29.128:33061" 
# 绑定本地的192.168.29.128及33061端口接受其他组成员的连接,IP地址必须为其他组成员可正常访问
loose-group_replication_group_seeds="192.168.29.128:33061,192.168.29.128:33062,192.168.29.128:33063"
# 本行为告诉服务器当服务器加入组时,应当连接到192.168.29.128:33061,192.168.29.128:33062,192.168.29.128:33063
# 这些种子服务器进行配置。本设置可以不是全部的组成员服务地址。
loose-group_replication_bootstrap_group = off 
# 配置是否自动引导组
loose-group_replication_ip_whitelist=”10.30.0.0/16,10.31.0..0/16,10.27.0.0/16″
# 配置白名单,默认情况下只允许192.168.29.128连接到复制组,如果是其他IP则需要配置。
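
To confirm these settings took effect after an instance starts, a small sketch (run on any of the three instances):

# Before the plugin is installed, only the core settings are visible:
SHOW VARIABLES LIKE 'transaction_write_set_extraction';
SHOW VARIABLES LIKE 'gtid_mode';
# After INSTALL PLUGIN (step 6 below), the loose- settings appear as well:
SHOW VARIABLES LIKE 'group_replication%';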

5. Initialize the data directories and start the instances

/usr/local/mysql/bin/mysqld --initialize-insecure --user=mysql --basedir=/usr/local/mysql --datadir=/data/3306/data

/usr/local/mysql/bin/mysqld --initialize-insecure --user=mysql --basedir=/usr/local/mysql --datadir=/data/3307/data

/usr/local/mysql/bin/mysqld --initialize-insecure  --user=mysql --basedir=/usr/local/mysql --datadir=/data/3308/data

mysqld_safe --defaults-file=/data/3306/my.cnf &
mysqld_safe --defaults-file=/data/3307/my.cnf &
mysqld_safe --defaults-file=/data/3308/my.cnf &
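
Once all three mysqld_safe processes are up, a minimal sanity check through each socket confirms every instance came up with the identity its my.cnf intended:

# e.g. mysql -S /data/3306/mysql.sock, then:
SELECT @@server_id, @@port, @@gtid_mode, @@binlog_format;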

6. Add 3306 to the group

Log in to the 3306 instance and create the replication user
mysql -S /data/3306/mysql.sock
set sql_log_bin=0;
grant replication slave,replication client on *.* to repl@'localhost' identified by '123';
grant replication slave,replication client on *.* to repl@'127.0.0.1' identified by '123';
grant replication slave,replication client on *.* to repl@'192.168.29.%' identified by '123';
set sql_log_bin=1;
Note: if the three nodes are separate machines, the user must be granted for localhost, 127.0.0.1, and the remote host pattern.
Configure the distributed recovery channel

 change master to master_user='repl',master_password='123' for channel 'group_replication_recovery';

Load the group replication plugin

install plugin group_replication soname 'group_replication.so';
show plugins;

Start group replication (bootstrapping the group on this first node)
set global group_replication_bootstrap_group=ON;
start group_replication;
set global group_replication_bootstrap_group=OFF;
# Check that the group was created and this member has joined
select * from performance_schema.replication_group_members;

7. Add 3307 to the group

Create the replication user
mysql -S /data/3307/mysql.sock
set sql_log_bin=0;
grant replication slave,replication client on *.* to repl@'localhost' identified by '123';
grant replication slave,replication client on *.* to repl@'127.0.0.1' identified by '123';
grant replication slave,replication client on *.* to repl@'192.168.29.%' identified by '123';
set sql_log_bin=1;
Note: if the three nodes are separate machines, the user must be granted for localhost, 127.0.0.1, and the remote host pattern.

Configure the distributed recovery channel
change master to master_user='repl',master_password='123' for channel 'group_replication_recovery';

Load the group replication plugin
install plugin group_replication soname 'group_replication.so';
show plugins;

Start group replication
start group_replication;

# Check that the group exists and the new member has joined
select * from performance_schema.replication_group_members;

Note: the user-creation (and any password-change) statements above must be executed with binary logging disabled and re-enabled afterwards; otherwise START GROUP_REPLICATION fails with:
ERROR 3092 (HY000): The server is not configured properly to be an active member of the group. Please see more details on error log.
The workaround is to enable the group_replication_allow_local_disjoint_gtids_join option, as the error log suggests:
mysql> set global group_replication_allow_local_disjoint_gtids_join=ON;
and then run again:
mysql> start group_replication;

8. Add 3308 to the group

Create the replication user
mysql -S /data/3308/mysql.sock
set sql_log_bin=0;
grant replication slave,replication client on *.* to repl@'localhost' identified by '123';
grant replication slave,replication client on *.* to repl@'127.0.0.1' identified by '123';
grant replication slave,replication client on *.* to repl@'192.168.29.%' identified by '123';
set sql_log_bin=1;

Note: if the three nodes are separate machines, the user must be granted for localhost, 127.0.0.1, and the remote host pattern.

Configure the distributed recovery channel
change master to master_user='repl',master_password='123' for channel 'group_replication_recovery';

Load the group replication plugin
install plugin group_replication soname 'group_replication.so';
show plugins;

Start group replication
start group_replication;
# Check that the group exists and the new member has joined
select * from performance_schema.replication_group_members;

III. MGR Considerations

1. How a new primary is elected

1. When the primary goes down, the group automatically picks the next primary from the remaining members based on each server's server_uuid and its group_replication_member_weight value; the member with the highest group_replication_member_weight becomes the new primary.

2. When group_replication_member_weight values are equal, the group sorts the members by server_uuid and elects the first one.

3. Unlike MongoDB, adjusting a weight does not by itself trigger a switchover; a new election only happens when a failure is actively triggered (a weight-setting sketch follows this list).
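
A minimal sketch of biasing the next election (group_replication_member_weight is available from MySQL 5.7.20; range 0-100, default 50):

# On the member you prefer to win the next election:
SET GLOBAL group_replication_member_weight = 80;
# Confirm the setting:
SHOW VARIABLES LIKE 'group_replication_member_weight';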

2. Membership and fault tolerance

1. Once the number of failed nodes exceeds the tolerated threshold, the whole group stalls and becomes read-only. For example, in a 3-node group, losing 2 nodes leaves the group read-only. The rule: with total = 2n + 1 members, at most n failed nodes can be tolerated.

2. A single node is only treated as unavailable, expelled from the group, and reflected in a view change once its state reaches ERROR.

3. Member states in the group view

  ONLINE: the node is serving normally
  RECOVERING: the node is currently recovering data from other members
  OFFLINE: the GR plugin is loaded, but the node does not belong to any group
  ERROR: the node hit an error during the recovery phase or while synchronizing state from other members
  UNREACHABLE: the node cannot be reached; network communication with it is impossible

4. Write sets

  • Primary keys hold an extremely important position in MGR: they are the key evidence for deciding whether transactions conflict, which is why every table is required to have one (see the example table after this list)
  • Write-set information is wrapped into a Transaction_context_log_event and sent to the other members together with the rest of the binlog events
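
Since certification keys off the primary key, a table destined for MGR should look like this small sketch (table and column names are illustrative):

# Every MGR table must be InnoDB and carry an explicit primary key
CREATE TABLE demo_orders (
    id   BIGINT NOT NULL AUTO_INCREMENT,
    name VARCHAR(64) NOT NULL,
    PRIMARY KEY (id)
) ENGINE = InnoDB;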

5. Limitations and caveats

 1 Only InnoDB tables are supported, and every table must have a primary key, which is used for write-set conflict detection;
 2 An MGR group currently supports at most 9 nodes;
 3 Foreign keys and savepoints are not supported, so group-wide constraint checks and partial rollbacks are impossible;
 4 Binlog event checksums are not supported (hence binlog_checksum=NONE);
 5 Large transactions are constrained;
 6 When migrating data, make sure every table has a primary key; ad-hoc backup copies such as bak tables usually have none, so drop them beforehand (a query that finds such tables follows this list);
 7 Binary logging must be enabled and set to ROW format;
 8 GTID must be enabled, and the master/relay-log state must be stored in tables;
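
A sketch of the pre-migration check from item 6: this standard information_schema query lists the base tables that lack a primary key; nothing MGR-specific is assumed.

# Find base tables without a primary key
SELECT t.table_schema, t.table_name
FROM information_schema.tables t
LEFT JOIN information_schema.table_constraints c
       ON c.table_schema = t.table_schema
      AND c.table_name   = t.table_name
      AND c.constraint_type = 'PRIMARY KEY'
WHERE t.table_type = 'BASE TABLE'
  AND c.constraint_name IS NULL
  AND t.table_schema NOT IN ('mysql','information_schema','performance_schema','sys');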

IV. Common Errors

# Problem 1:
[ERROR] Plugin group_replication reported: 'This member has more
executed transactions than those present in the group. Local
transactions: bb874065-c485-11e8-8b52-000c2934472e:1 > Group
transactions: 3db33b36-0e51-409f-a61d-c99756e90155:1-11'
[ERROR] Plugin group_replication reported: 'The member contains transactions
not present in the group. The member will now exit the group.'
[Note] Plugin group_replication reported: 'To force this member into the
group you can use the
group_replication_allow_local_disjoint_gtids_join option'
This usually appears on a node other than the first one. As the log suggests, enable the option:
set global group_replication_allow_local_disjoint_gtids_join=ON;

# Problem 2
[ERROR] Plugin group_replication reported: 'This member has more
executed transactions than those present in the group. Local
transactions: bb874065-c485-11e8-8b52-000c2934472e:1 > Group
transactions: 3db33b36-0e51-409f-a61d-c99756e90155:1-15'
[Warning] Plugin group_replication reported: 'The member contains transactions
not present in the group. It is only allowed to join due to
group_replication_allow_local_disjoint_gtids_join option'
[Note] Plugin group_replication reported: 'This server is working as
secondary member with primary member address
localhost.localdomaion:3306.'
This differs from problem 1 in that the group_replication_allow_local_disjoint_gtids_join parameter is already set to ON when the error appears. The fix is to run reset master, then re-establish the channel on the primary and secondary nodes, i.e.

CHANGE MASTER TO MASTER_USER='MGR', MASTER_PASSWORD='abcedf!@#' FOR
CHANNEL 'group_replication_recovery';

reset master mainly resolves the data-divergence problem. Before running it, stop the application first so that no data is being written, then let the node that has fallen behind the group re-apply data from the binlog. Be absolutely clear about which node you run it on: normally an abnormal node or a newly joining node; use it on an online node with great caution.
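
Putting problem 2's fix together, a sketch of the full sequence on the diverged node (assumes its local history can safely be discarded and re-seeded from the group; uses the repl user from the setup above):

# On the diverged node only -- this wipes its local binlog/GTID history
stop group_replication;
reset master;
change master to master_user='repl', master_password='123' for channel 'group_replication_recovery';
start group_replication;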

# Problem 3
[Warning] Storing MySQL user name or password information in the
master info repository is not secure and is therefore not recommended.
Please consider using the USER and PASSWORD connection options for
START SLAVE; see the 'START SLAVE Syntax' in the MySQL Manual for more
information.
[ERROR] Slave I/O for channel
'group_replication_recovery': Master command COM_REGISTER_SLAVE
failed: Access denied for user 'MGR'@'localhost' (using password: YES)
(Errno: 1045), Error_code: 1597
[ERROR] Slave I/O thread couldn't register on master
[Note] Slave I/O thread exiting for channel
'group_replication_recovery', read up to log 'FIRST', position
This is a privileges problem; re-grant the replication user on every relevant host (cf. the grant steps in section II):

grant replication slave on *.* to MGR@'192.168.1.%' identified by 'abc!@#';
grant replication slave on *.* to MGR@'localhost' identified by 'abc!@#';
grant replication slave on *.* to MGR@'127.0.0.1' identified by 'abc!@#';

# Problem 4
ERROR 1872 (HY000): Slave failed to initialize relay log info
structure from the repository
Fix: run reset slave.

What reset slave does (a channel-scoped sketch follows):

  • deletes the rows in the slave_master_info and slave_relay_log_info tables;
  • deletes all relay log files and re-creates a fresh relay log;
  • does not change the value of gtid_executed or gtid_purged
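
A sketch of the fix scoped to the recovery channel (RESET SLAVE ALL also drops the channel's connection settings, so CHANGE MASTER must be re-run afterwards; repl/'123' are the credentials from the setup above):

stop group_replication;
reset slave all for channel 'group_replication_recovery';
change master to master_user='repl', master_password='123' for channel 'group_replication_recovery';
start group_replication;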

V. Recovering MGR Nodes After a Crash

  • When a node in an MGR group goes down, the group itself keeps running; the crashed node rejoins by restarting group replication on it once the instance is back up

1. Environment

Hostname  IP               OS          MySQL version
db01      192.168.179.102  CentOS 7.4  5.7.30
db02      192.168.179.103  CentOS 7.4  5.7.30
db03      192.168.179.104  CentOS 7.4  5.7.30

2. Nodes crash (db02 and db03 go down)

Checking on the primary db01, you can see that db02 and db03 are no longer present; on db01 the two members have been removed from the group view.

On db02 and db03 themselves, the member state shows ERROR.

3. Recovery steps

1. In single-primary mode, if the crashed node was the read-write node, write capability fails over to another node; if it was a read-only node, the crash does not affect the group at all. In multi-primary mode a single crashed node affects the group even less. So as long as the group still contains a read-write node, it is enough to restart the crashed instance and start group replication on it to rejoin the group, whether one node or several went down.

2. This is a single-primary group and the primary is still up and writable, so on each failed node we only need to run stop group_replication; followed by start group_replication;.

3. If you try to start without stopping first, you get the following error:
mysql> start group_replication;
ERROR 3093 (HY000): The START GROUP_REPLICATION command failed since the group is already running.

4. While db02 and db03 are still out of the group, insert a row on the primary db01 to check later whether it syncs once they rejoin:
mysql> insert into tbs02 values('3','test');
Query OK, 1 row affected (0.01 sec)

5. On db02:
mysql> stop group_replication;
Query OK, 0 rows affected (6.03 sec)
mysql> start group_replication;
Query OK, 0 rows affected, 1 warning (5.05 sec)

# Primary's log: db02 has successfully joined the group
========================================
2020-05-31T02:23:00.068064Z 0 [Note] Plugin group_replication reported: 'Members joined the group: db02:3306'
2020-05-31T02:23:00.068315Z 0 [Note] Plugin group_replication reported: 'Group membership changed to db01:3306, db02:3306 on view 15907170974285400:6.'
2020-05-31T02:23:00.122246Z 30 [Note] Start binlog_dump to master_thread_id(30) slave_server(102), pos(, 4)
2020-05-31T02:23:00.179611Z 0 [Note] Plugin group_replication reported: 'The member with address db02:3306 was declared online within the replication group'
2020-05-31T02:24:00.123889Z 30 [Note] Aborted connection 30 to db: 'unconnected' user: 'repl' host: 'db02' (failed on flush_net())


# Check the group status: db02 has joined
======================================
mysql> select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | 18f5da07-a096-11ea-8c70-000c290e1abf | db01        |        3306 | ONLINE       |
| group_replication_applier | dfe895a8-a0df-11ea-bb7d-000c296190c1 | db02        |        3306 | ONLINE       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+

Repeat the same steps on db03:

mysql> stop group_replication;
Query OK, 0 rows affected (6.03 sec)
mysql> start group_replication;
Query OK, 0 rows affected, 1 warning (3.07 sec)

# db01 log (the db02 entries shown above are followed by db03's join)
2020-05-31T02:39:28.051102Z 0 [Note] Plugin group_replication reported: 'Members joined the group: db03:3306'
2020-05-31T02:39:28.051830Z 0 [Note] Plugin group_replication reported: 'Group membership changed to db01:3306, db03:3306, db02:3306 on view 15907170974285400:7.'
2020-05-31T02:39:28.274123Z 0 [Note] Plugin group_replication reported: 'The member with address db03:3306 was declared online within the replication group'

# db02 log
2020-05-31T02:39:28.051213Z 0 [Note] Plugin group_replication reported: 'Members joined the group: db03:3306'
2020-05-31T02:39:28.051941Z 0 [Note] Plugin group_replication reported: 'Group membership changed to db01:3306, db03:3306, db02:3306 on view 15907170974285400:7.'
2020-05-31T02:39:28.157716Z 47 [Note] Start binlog_dump to master_thread_id(47) slave_server(103), pos(, 4)
2020-05-31T02:39:28.274040Z 0 [Note] Plugin group_replication reported: 'The member with address db03:3306 was declared online within the replication group'


# The logs above show that when db03 joined, both db01 and db02 observed the join; check that all members are now online
mysql> select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | 18f5da07-a096-11ea-8c70-000c290e1abf | db01        |        3306 | ONLINE       |
| group_replication_applier | 7d3e894a-a0a9-11ea-82f8-000c29a7fff7 | db03        |        3306 | ONLINE       |
| group_replication_applier | dfe895a8-a0df-11ea-bb7d-000c296190c1 | db02        |        3306 | ONLINE       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
3 rows in set (0.01 sec)

4. Verify the data

"最后验证在正确加入集群db02,db03上是否可以查询到db01在db02,db03尚未正确加入集群插入的数据"

# Visible on db02
mysql> select * from tbs02;
+----+------+
| id | name |
+----+------+
|  1 | fxkt |
|  3 | test |
+----+------+
2 rows in set (0.00 sec)

# Visible on db03 as well
mysql> select * from tbs02;
+----+------+
| id | name |
+----+------+
|  1 | fxkt |
|  3 | test |
+----+------+
2 rows in set (0.00 sec)

# Conclusion: after db02 and db03 crashed, restarted, and rejoined the group, the data modified on the primary db01 was synchronized to them.