MongoDB Sharding Cluster and High Availability

一、MongoDB Sharding (MSC)

1. The Concept of Sharding

A MongoDB replica set, like Redis high availability, only serves reads on the secondaries and cannot share the primary's write load; secondaries only take over when the primary fails.
The memory and disk MongoDB can actually use are just those of the primary, and mismatched hardware among replica set members causes further problems.

2. Introduction to Sharding

#Advantages:
    1. Better utilization of machine resources
    2. Less pressure on the primary
#Disadvantages:
    1. More machines are needed
    2. Configuration and administration are more complex and difficult
    3. Once sharding is configured, it is hard to change

3. How Sharding Works

1) Routing service: mongos server

It acts like a proxy, similar to Atlas in front of MySQL: it distributes client requests to the backend mongod servers.

2) Shard metadata service: config server

The mongos server does not know how many backend mongod nodes exist or what their addresses are; it only connects to the config server, and the config server is the service that records the backend addresses and data placement.
Roles:
    1. Record information about the backend mongod nodes
    2. Record which node each piece of data was written to
    3. Provide mongos with the backend topology

3) Shard key

The config server only stores metadata; it does not write data to the nodes itself, which is where the shard key comes in. A shard key is an index.
Roles:
    1. Distribute data to the different nodes according to its rule
    2. Acts as an index, speeding up access
Types (a small sketch follows this list):
    1. Range shard key (data can easily end up unevenly distributed)
        e.g. shard by time intervals, indexing on a time field
        e.g. shard by region intervals, indexing on a region field
    2. Hashed shard key (sufficiently even, sufficiently random)
        distributes documents by hashing a field such as id
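A minimal sketch of the two shard key styles in the mongos shell (mydb.orders / mydb.events are hypothetical names; sharding must first be enabled on the database, and the real commands for this cluster appear in section 三):

sh.shardCollection("mydb.orders", { created_at: 1 })   //range key: contiguous created_at intervals per shard
sh.shardCollection("mydb.events", { id: "hashed" })    //hashed key: documents spread evenly by hash of id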

4) Shards

The nodes that actually store the data; this layout is what makes it a distributed cluster.

二、Building the Sharded Cluster

Sharding alone only addresses single nodes; each shard is effectively still a single mongod, so we also need to turn each shard into a replica set.

As with ES, we cannot place a node's replica on the same machine as the node itself; if that machine dies, the data is still unavailable.

So we stagger the replica sets across machines, and no machine may carry two data-bearing members of the same shard, otherwise elections would break again.

1. Server plan: 3 machines, 15 nodes

Host      | IP        | Deployed services                                                         | Ports
mongodb01 | 10.0.0.81 | Shard1_Master, Shard2_Slave, Shard3_Arbiter, Config Server, Mongos Server | 28010, 28020, 28030, 40000, 60000
mongodb02 | 10.0.0.82 | Shard2_Master, Shard3_Slave, Shard1_Arbiter, Config Server, Mongos Server | 28010, 28020, 28030, 40000, 60000
mongodb03 | 10.0.0.83 | Shard3_Master, Shard1_Slave, Shard2_Arbiter, Config Server, Mongos Server | 28010, 28020, 28030, 40000, 60000

2. Directory layout

#Service directories
mkdir /server/mongodb/master/{conf,log,pid,data} -p
mkdir /server/mongodb/slave/{conf,log,pid,data} -p
mkdir /server/mongodb/arbiter/{conf,log,pid,data} -p
mkdir /server/mongodb/config/{conf,log,pid,data} -p
mkdir /server/mongodb/mongos/{conf,log,pid} -p

3. Install MongoDB

#Install dependencies
yum install -y libcurl openssl
#Upload or download the package
rz mongodb-linux-x86_64-3.6.13.tgz
#Extract
tar xf mongodb-linux-x86_64-3.6.13.tgz -C /usr/local/
#Create a symlink
ln -s /usr/local/mongodb-linux-x86_64-3.6.13 /usr/local/mongodb

4. Configure mongodb01

1) Configure the shard master

vim /server/mongodb/master/conf/mongo.conf

systemLog:
  destination: file 
  logAppend: true 
  path: /server/mongodb/master/log/master.log

storage:
  journal:
    enabled: true
  dbPath: /server/mongodb/master/data/
  directoryPerDB: true

  wiredTiger:
    engineConfig:
      cacheSizeGB: 1
      directoryForIndexes: true
    collectionConfig:
      blockCompressor: zlib
    indexConfig:
      prefixCompression: true

processManagement:
  fork: true
  pidFilePath: /server/mongodb/master/pid/master.pid
  timeZoneInfo: /usr/share/zoneinfo

net:
  port: 28010
  bindIp: 127.0.0.1,10.0.0.81

replication:
  oplogSizeMB: 1024 
  replSetName: shard1

sharding:
  clusterRole: shardsvr   #cluster role: shard server in a sharded cluster

2) Configure the shard slave

vim /server/mongodb/slave/conf/mongo.conf

systemLog:
  destination: file 
  logAppend: true 
  path: /server/mongodb/slave/log/slave.log

storage:
  journal:
    enabled: true
  dbPath: /server/mongodb/slave/data/
  directoryPerDB: true

  wiredTiger:
    engineConfig:
      cacheSizeGB: 1
      directoryForIndexes: true
    collectionConfig:
      blockCompressor: zlib
    indexConfig:
      prefixCompression: true

processManagement:
  fork: true
  pidFilePath: /server/mongodb/slave/pid/slave.pid
  timeZoneInfo: /usr/share/zoneinfo

net:
  port: 28020
  bindIp: 127.0.0.1,10.0.0.81

replication:
  oplogSizeMB: 1024
  replSetName: shard2

sharding:
  clusterRole: shardsvr

3) Configure the shard arbiter

vim /server/mongodb/arbiter/conf/mongo.conf

systemLog:
  destination: file 
  logAppend: true 
  path: /server/mongodb/arbiter/log/arbiter.log

storage:
  journal:
    enabled: true
  dbPath: /server/mongodb/arbiter/data/
  directoryPerDB: true

  wiredTiger:
    engineConfig:
      cacheSizeGB: 1
      directoryForIndexes: true
    collectionConfig:
      blockCompressor: zlib
    indexConfig:
      prefixCompression: true

processManagement:
  fork: true
  pidFilePath: /server/mongodb/arbiter/pid/arbiter.pid
  timeZoneInfo: /usr/share/zoneinfo

net:
  port: 28030
  bindIp: 127.0.0.1,10.0.0.81

replication:
  oplogSizeMB: 1024
  replSetName: shard3

sharding:
  clusterRole: shardsvr

5. Configure mongodb02

1) Configure the shard master

vim /server/mongodb/master/conf/mongo.conf

systemLog:
  destination: file 
  logAppend: true 
  path: /server/mongodb/master/log/master.log

storage:
  journal:
    enabled: true
  dbPath: /server/mongodb/master/data/
  directoryPerDB: true

  wiredTiger:
    engineConfig:
      cacheSizeGB: 1
      directoryForIndexes: true
    collectionConfig:
      blockCompressor: zlib
    indexConfig:
      prefixCompression: true

processManagement:
  fork: true
  pidFilePath: /server/mongodb/master/pid/master.pid
  timeZoneInfo: /usr/share/zoneinfo

net:
  port: 28010
  bindIp: 127.0.0.1,10.0.0.82

replication:
  oplogSizeMB: 1024 
  replSetName: shard2

sharding:
  clusterRole: shardsvr

2) Configure the shard slave

vim /server/mongodb/slave/conf/mongo.conf

systemLog:
  destination: file 
  logAppend: true 
  path: /server/mongodb/slave/log/slave.log

storage:
  journal:
    enabled: true
  dbPath: /server/mongodb/slave/data/
  directoryPerDB: true

  wiredTiger:
    engineConfig:
      cacheSizeGB: 1
      directoryForIndexes: true
    collectionConfig:
      blockCompressor: zlib
    indexConfig:
      prefixCompression: true

processManagement:
  fork: true
  pidFilePath: /server/mongodb/slave/pid/slave.pid
  timeZoneInfo: /usr/share/zoneinfo

net:
  port: 28020
  bindIp: 127.0.0.1,10.0.0.82

replication:
  oplogSizeMB: 1024
  replSetName: shard3

sharding:
  clusterRole: shardsvr

3) Configure the shard arbiter

vim /server/mongodb/arbiter/conf/mongo.conf

systemLog:
  destination: file 
  logAppend: true 
  path: /server/mongodb/arbiter/log/arbiter.log

storage:
  journal:
    enabled: true
  dbPath: /server/mongodb/arbiter/data/
  directoryPerDB: true

  wiredTiger:
    engineConfig:
      cacheSizeGB: 1
      directoryForIndexes: true
    collectionConfig:
      blockCompressor: zlib
    indexConfig:
      prefixCompression: true

processManagement:
  fork: true
  pidFilePath: /server/mongodb/arbiter/pid/arbiter.pid
  timeZoneInfo: /usr/share/zoneinfo

net:
  port: 28030
  bindIp: 127.0.0.1,10.0.0.82

replication:
  oplogSizeMB: 1024
  replSetName: shard1

sharding:
  clusterRole: shardsvr

6. Configure mongodb03

1) Configure the shard master

vim /server/mongodb/master/conf/mongo.conf

systemLog:
  destination: file 
  logAppend: true 
  path: /server/mongodb/master/log/master.log

storage:
  journal:
    enabled: true
  dbPath: /server/mongodb/master/data/
  directoryPerDB: true

  wiredTiger:
    engineConfig:
      cacheSizeGB: 1
      directoryForIndexes: true
    collectionConfig:
      blockCompressor: zlib
    indexConfig:
      prefixCompression: true

processManagement:
  fork: true
  pidFilePath: /server/mongodb/master/pid/master.pid
  timeZoneInfo: /usr/share/zoneinfo

net:
  port: 28010
  bindIp: 127.0.0.1,10.0.0.83

replication:
  oplogSizeMB: 1024 
  replSetName: shard3

sharding:
  clusterRole: shardsvr

2) Configure the shard slave

vim /server/mongodb/slave/conf/mongo.conf

systemLog:
  destination: file 
  logAppend: true 
  path: /server/mongodb/slave/log/slave.log

storage:
  journal:
    enabled: true
  dbPath: /server/mongodb/slave/data/
  directoryPerDB: true

  wiredTiger:
    engineConfig:
      cacheSizeGB: 1
      directoryForIndexes: true
    collectionConfig:
      blockCompressor: zlib
    indexConfig:
      prefixCompression: true

processManagement:
  fork: true
  pidFilePath: /server/mongodb/slave/pid/slave.pid
  timeZoneInfo: /usr/share/zoneinfo

net:
  port: 28020
  bindIp: 127.0.0.1,10.0.0.83

replication:
  oplogSizeMB: 1024
  replSetName: shard1

sharding:
  clusterRole: shardsvr

3) Configure the shard arbiter

vim /server/mongodb/arbiter/conf/mongo.conf

systemLog:
  destination: file 
  logAppend: true 
  path: /server/mongodb/arbiter/log/arbiter.log

storage:
  journal:
    enabled: true
  dbPath: /server/mongodb/arbiter/data/
  directoryPerDB: true

  wiredTiger:
    engineConfig:
      cacheSizeGB: 1
      directoryForIndexes: true
    collectionConfig:
      blockCompressor: zlib
    indexConfig:
      prefixCompression: true

processManagement:
  fork: true
  pidFilePath: /server/mongodb/arbiter/pid/arbiter.pid
  timeZoneInfo: /usr/share/zoneinfo

net:
  port: 28030
  bindIp: 127.0.0.1,10.0.0.83

replication:
  oplogSizeMB: 1024
  replSetName: shard2

sharding:
  clusterRole: shardsvr

7. Configure environment variables

[root@mongodb01 ~]# vim /etc/profile.d/mongo.sh
export PATH="/usr/local/mongodb/bin:$PATH"

[root@mongodb01 ~]# source /etc/profile

8. Fix the startup warnings

useradd mongo -s /sbin/nologin -M 
echo "never"  > /sys/kernel/mm/transparent_hugepage/enabled
echo "never"  > /sys/kernel/mm/transparent_hugepage/defrag

9. Configure systemd management

1) systemd unit for the master

vim /usr/lib/systemd/system/mongod-master.service

[Unit]
Description=MongoDB Database Server
Documentation=https://docs.mongodb.org/manual
After=network.target

[Service]
User=mongo
Group=mongo
ExecStart=/usr/local/mongodb/bin/mongod -f /server/mongodb/master/conf/mongo.conf
ExecStartPre=/usr/bin/chown -R mongo:mongo /server/mongodb/master/
ExecStop=/usr/local/mongodb/bin/mongod -f /server/mongodb/master/conf/mongo.conf --shutdown
PermissionsStartOnly=true
PIDFile=/server/mongodb/master/pid/master.pid
Type=forking

[Install]
WantedBy=multi-user.target

2) systemd unit for the slave

vim /usr/lib/systemd/system/mongod-slave.service

[Unit]
Description=MongoDB Database Server
Documentation=https://docs.mongodb.org/manual
After=network.target

[Service]
User=mongo
Group=mongo
ExecStart=/usr/local/mongodb/bin/mongod -f /server/mongodb/slave/conf/mongo.conf
ExecStartPre=/usr/bin/chown -R mongo:mongo /server/mongodb/slave/
ExecStop=/usr/local/mongodb/bin/mongod -f /server/mongodb/slave/conf/mongo.conf --shutdown
PermissionsStartOnly=true
PIDFile=/server/mongodb/slave/pid/slave.pid
Type=forking

[Install]
WantedBy=multi-user.target

3) systemd unit for the arbiter

vim /usr/lib/systemd/system/mongod-arbiter.service

[Unit]
Description=MongoDB Database Server
Documentation=https://docs.mongodb.org/manual
After=network.target

[Service]
User=mongo
Group=mongo
ExecStart=/usr/local/mongodb/bin/mongod -f /server/mongodb/arbiter/conf/mongo.conf
ExecStartPre=/usr/bin/chown -R mongo:mongo /server/mongodb/arbiter/
ExecStop=/usr/local/mongodb/bin/mongod -f /server/mongodb/arbiter/conf/mongo.conf --shutdown
PermissionsStartOnly=true
PIDFile=/server/mongodb/arbiter/pid/arbiter.pid
Type=forking

[Install]
WantedBy=multi-user.target

4) Reload systemd

systemctl daemon-reload

10. Start all MongoDB nodes (on all three machines)

systemctl start mongod-master.service
systemctl start mongod-slave.service
systemctl start mongod-arbiter.service
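Optionally enable the three services at boot and sanity-check that the three ports are listening (netstat assumes net-tools is installed; ss works as well):

systemctl enable mongod-master.service mongod-slave.service mongod-arbiter.service
#each machine should show mongod listening on 28010, 28020 and 28030
netstat -lntp | grep -E "28010|28020|28030"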

11. Configure the replica sets (staggered layout)

1) Initialize the shard1 replica set on mongodb01

#Connect to the node that will become the primary
mongo --port 28010
rs.initiate()
rs.add("10.0.0.83:28020")
rs.addArb("10.0.0.82:28030")

Or initialize everything in one step:
config = {_id: 'shard1', members: [
    {_id: 0, host: '10.0.0.81:28010'},
    {_id: 1, host: '10.0.0.83:28020'},
    {_id: 2, host: '10.0.0.82:28030',"arbiterOnly":true}]
           }
rs.initiate(config)

2) Initialize the shard2 replica set on mongodb02

#Connect to the node that will become the primary
mongo --port 28010
rs.initiate()
rs.add("10.0.0.81:28020")
rs.addArb("10.0.0.83:28030")

3) Initialize the shard3 replica set on mongodb03

#Connect to the node that will become the primary
mongo --port 28010
rs.initiate()
rs.add("10.0.0.82:28020")
rs.addArb("10.0.0.81:28030")

4) Check the replica set status on all nodes

#On each of the three primaries
mongo --port 28010
rs.status()
rs.isMaster()
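For a quicker glance than the full rs.status() document, a small sketch that prints each member's name and state:

rs.status().members.forEach(function(m){ print(m.name + "  " + m.stateStr) })   //e.g. PRIMARY / SECONDARY / ARBITER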

12. Configure the config server

1) Create directories (already created in the directory layout step above)

2) Configure the config server (on all three machines, setting bindIp to each host's own IP)

vim /server/mongodb/config/conf/mongo.conf

systemLog:
  destination: file 
  logAppend: true 
  path: /server/mongodb/config/log/mongodb.log

storage:
  journal:
    enabled: true
  dbPath: /server/mongodb/config/data/
  directoryPerDB: true

  wiredTiger:
    engineConfig:
      cacheSizeGB: 1
      directoryForIndexes: true
    collectionConfig:
      blockCompressor: zlib
    indexConfig:
      prefixCompression: true

processManagement:
  fork: true
  pidFilePath: /server/mongodb/config/pid/mongod.pid
  timeZoneInfo: /usr/share/zoneinfo

net:
  port: 40000
  bindIp: 127.0.0.1,10.0.0.81

replication:
  replSetName: configset

sharding:
  clusterRole: configsvr

3) Start

/usr/local/mongodb/bin/mongod -f /server/mongodb/config/conf/mongo.conf

4) Initialize the config server replica set on mongodb01

mongo --port 40000

rs.initiate({
  _id:"configset", 
  configsvr: true,
  members:[
    {_id:0,host:"10.0.0.81:40000"},
    {_id:1,host:"10.0.0.82:40000"},
    {_id:2,host:"10.0.0.83:40000"},
  ]})

5) Verify

rs.status()
rs.isMaster()

13. Configure mongos

1) Create directories (already created in the directory layout step above)

2) Configure mongos

vim /server/mongodb/mongos/conf/mongo.conf
=======================
systemLog:
  destination: file 
  logAppend: true 
  path: /server/mongodb/mongos/log/mongos.log

processManagement:
  fork: true
  pidFilePath: /server/mongodb/mongos/pid/mongos.pid
  timeZoneInfo: /usr/share/zoneinfo

net:
  port: 60000
  bindIp: 127.0.0.1,10.0.0.81

sharding:
  configDB: configset/10.0.0.81:40000,10.0.0.82:40000,10.0.0.83:40000
=======================
#The replica set name before the "/" must match the config server's replSetName (configset here).

3) Start

/usr/local/mongodb/bin/mongos -f /server/mongodb/mongos/conf/mongo.conf

4) Add the shard members

#Log in to mongos
mongo --port 60000

#Add the shard replica sets
use admin
db.runCommand({addShard:'shard1/10.0.0.81:28010,10.0.0.83:28020,10.0.0.82:28030'})
db.runCommand({addShard:'shard2/10.0.0.82:28010,10.0.0.81:28020,10.0.0.83:28030'})
db.runCommand({addShard:'shard3/10.0.0.83:28010,10.0.0.82:28020,10.0.0.81:28030'})

5) View shard information

db.runCommand( { listshards : 1 } )

or
sh.status()

"在db02和db03上也重复同样步骤即可."
  • 以上即完成了集群的搭建,已经可以使用,但分片是使用自动分片的功能,会多做一步chunk迁移,影响io性能,所以使用以下分片的进一步优化步骤.

三、Manually Configuring the Shard Key Strategy

  • range sharding
  • hash sharding (recommended: faster and more evenly distributed)

1. Range sharding

1) Enable sharding on the database

This is enabled through mongos.
#Log in to mongos
mongo --port 60000
use admin 

#Enable sharding on the target database
db.runCommand( { enablesharding : "test" } )

2) Create the collection index

mongo --port 60000 
use test
db.range.ensureIndex( { id: 1 } )
#Create an ascending index on the id field of the range collection.

3) Enable sharding on the collection, with id as the shard key

use admin
db.runCommand( { shardcollection : "test.range",key : {id: 1} } )
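The sh helper wraps the same command, so this shorthand is equivalent:

sh.shardCollection("test.range", { id: 1 })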

4) Insert test data

use test
for(i=1;i<10000;i++){ db.range.insert({"id":i,"name":"shanghai","age":28,"date":new Date()}); }
db.range.stats()
db.range.count()
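To see where the chunks actually land per shard (with a monotonically increasing id, a range key tends to write to one shard first, illustrating the uneven distribution mentioned earlier):

use test
db.range.getShardDistribution()   //documents and data size per shard
sh.status()                       //chunk ranges assigned to each shard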

2. Hash sharding

#Enable sharding on the database
mongo --port 60000
use admin
db.runCommand( { enablesharding : "testhash" } )

1) Create the hashed index on the collection

use testhash
db.hash.ensureIndex( { id: "hashed" } )
#Create a hashed index on the id field of the hash collection in the testhash database.

2) Enable hashed sharding on the collection

use admin
sh.shardCollection( "testhash.hash", { id: "hashed" } )

3) Generate test data

use testhash
for(i=1;i<100000;i++){ db.hash.insert({"id":i,"name":"shanghai","age":70}); }

4) Verify the data

Shard verification, on each shard's primary:
#mongodb01
mongo --port 28010
use testhash
db.hash.count()
33755

#mongodb02
mongo --port 28010
use testhash
db.hash.count()
33142

#mongodb03
mongo --port 28010
use testhash
db.hash.count()
33102
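As a cross-check, the per-shard counts should add up to the total seen through mongos:

mongo --port 60000
use testhash
db.hash.count()
#33755 + 33142 + 33102 = 99999, matching the number of documents inserted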

四、Common Sharded Cluster Management Commands

## sh.status() shows all of the information below
   sh.status()

# 1. List full sharding details
    db.printShardingStatus()
    sh.status()

# 2. List all shard member information
    use admin
    db.runCommand({ listshards : 1})

# 3. List databases with sharding enabled
    use config
    db.databases.find({"partitioned": true })

# 4. View the shard keys
    use config
    db.collections.find().pretty()

# 5. Other commands
Removing a shard node (use with caution)
(1) Confirm whether the balancer is running
sh.getBalancerState()
(2) Remove the shard2 node (caution)
mongos> db.runCommand( { removeShard: "shard2" } )
#Note: removal immediately triggers the balancer, which then migrates the data off the shard. It is slow and highly disruptive, so it is best done during planned downtime.
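Running the same command a second time polls the draining progress; the returned document keeps reporting the remaining chunk count until its state becomes "completed":

mongos> db.runCommand( { removeShard: "shard2" } )   //repeat to poll; watch the "remaining" field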


五、Balancer Operations and Management

1. Balancer introduction

# An important mongos feature: it continually inspects the chunk distribution on all shard nodes and migrates chunks automatically.
When does it run?
1. Automatically, choosing times when the system is not busy
2. Immediately, when a shard is being removed
3. Only within the preconfigured time window, if one has been set

# The balancer can be stopped and started on demand (e.g. during backups)
mongos> sh.stopBalancer()
mongos> sh.startBalancer()
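Before a backup, it is worth checking not only that the balancer is disabled but also that no migration round is still in flight:

mongos> sh.getBalancerState()     //false once stopBalancer() has taken effect
mongos> sh.isBalancerRunning()    //must also be false before taking a consistent backup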

2. Customize the balancer's active time window

#Official documentation:
https://docs.mongodb.com/manual/tutorial/manage-sharded-cluster-balancer/#schedule-the-balancing-window

========================
#!A configuration that must be tuned in production. (Important)
use config
sh.setBalancerState( true )
db.settings.update({ _id : "balancer" }, { $set : { activeWindow : { start : "3:00", stop : "5:00" } } }, true )
#Set the start and stop times of the balancer window
sh.getBalancerWindow()
sh.status()
========================

#Per-collection balancing (good to know):
1. Disable balancing for a specific collection
sh.disableBalancing("students.grades")

2. Enable balancing for a specific collection
sh.enableBalancing("students.grades")

3. Check whether balancing is enabled or disabled for a specific collection
db.getSiblingDB("config").collections.findOne({_id : "students.grades"}).noBalance;

六、Configuring Keyfile Authentication for the Replica Set

openssl rand -base64 123 > /server/mongodb/mongo.key
chown mongo:mongo /server/mongodb/mongo.key
chmod 600 /server/mongodb/mongo.key

scp -r /server/mongodb/mongo.key 10.0.0.82:/server/mongodb/
scp -r /server/mongodb/mongo.key 10.0.0.83:/server/mongodb/
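The steps above stop at distributing the keyfile. A minimal sketch of the config stanza each mongod and mongos node would then need (the path follows the keyfile created above; it must be enabled on every node of the cluster at the same time, followed by a restart):

security:
  keyFile: /server/mongodb/mongo.key   #internal auth between cluster members; also enforces client auth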