MongoDB's sharding is built on top of replica sets, so we start by configuring a replica set.
Use docker to start three containers from an image with MongoDB already installed:
- # docker run -idt --name mongodb_01 mongodb_master:v2 /bin/bash
- # docker run -idt --name mongodb_02 mongodb_master:v2 /bin/bash
- # docker run -idt --name mongodb_03 mongodb_master:v2 /bin/bash
Check each container's IP:
- # docker inspect mongodb_01 | grep IP
The IPs of the three containers are:
172.17.0.4, 172.17.0.5, 172.17.0.6
Enter each container, create MongoDB's data and log directories, and edit the config file:
- # docker exec -it mongodb_01 /bin/bash
- # mkdir -p /opt/mongodb/rs0/data /opt/mongodb/rs0/log
- # vi /usr/local/mongodb/conf/rs0.conf
dbpath=/opt/mongodb/rs0/data #data directory
logpath=/opt/mongodb/rs0/log/rs0.log #log file
pidfilepath=/opt/mongodb/rs0/log/rs0.pid #pid file
logappend=true
replSet=rs0 #replica set name
bind_ip=172.17.0.4 #this container's IP
port=27617
fork=true
maxConns=2000
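The ini-style options above also have a YAML equivalent, which newer mongod releases prefer; a sketch of the same settings in that format:

```yaml
# YAML equivalent of rs0.conf above (mongod 2.6+ configuration-file format)
storage:
  dbPath: /opt/mongodb/rs0/data
systemLog:
  destination: file
  path: /opt/mongodb/rs0/log/rs0.log
  logAppend: true
processManagement:
  fork: true
  pidFilePath: /opt/mongodb/rs0/log/rs0.pid
replication:
  replSetName: rs0
net:
  bindIp: 172.17.0.4
  port: 27617
  maxIncomingConnections: 2000
```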
//Start the mongod process (the container itself is already running)
# mongod -f /usr/local/mongdb/conf/rs0.conf
Once mongod is running in all three containers, connect to any one of them:
- # mongo --host 172.17.0.4 --port 27617
> rs.initiate() //initialize the replica set
> rs.conf() //confirm the change
> rs.add({host: "172.17.0.5:27618", priority: 6}) //add the other two mongod instances to the set
> rs.add({host: "172.17.0.6:27619"})
> rs.conf() //confirm the changes
> rs.status() //check the replica set's status
- priority is the member's weight in elections; the higher the value, the more likely the member is to become PRIMARY. rs.status() shows the current state of the set, with stateStr giving each member's role: here 172.17.0.5 is currently PRIMARY and the other two are SECONDARY. If we stop the mongod on 172.17.0.5 and look again a little later, one of 0.4 and 0.6 will have been elected PRIMARY; restart mongod on 0.5 and the set returns to its original state, since 0.5's higher priority lets it win the next election.
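That failover behaviour can be illustrated with a toy sketch. This is not MongoDB's actual election protocol (which involves voting, terms, and oplog freshness); it only captures the "highest priority among healthy members wins" intuition:

```shell
# Toy model of the intuition only: among members that are still healthy,
# the one with the highest priority is elected PRIMARY.
elect() {
  for m in $1; do                      # each entry is host:priority:health
    health=${m##*:}
    rest=${m%:*}
    prio=${rest##*:}
    host=${rest%%:*}
    [ "$health" = "up" ] && echo "$prio $host"
  done | sort -rn | head -n 1 | cut -d' ' -f2
}

elect "172.17.0.4:1:up 172.17.0.5:6:up 172.17.0.6:1:up"     # 172.17.0.5 wins (priority 6)
elect "172.17.0.4:1:up 172.17.0.5:6:down 172.17.0.6:1:up"   # 0.5 is down: another member takes over
```

When 0.5 comes back up, rerunning the first call shows it reclaiming PRIMARY, mirroring what the real cluster does.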
- "members" : [
- {"_id" : 0,"name" : "172.17.0.6:27619","health" : 1,"state" : 2,"stateStr" : "SECONDARY","uptime" : 264637,"optime" : {"ts" : Timestamp(1539406655, 1),"t" : NumberLong(4)
- },"optimeDate" : ISODate("2018-10-13T04:57:35Z"),"syncingTo" : "172.17.0.4:27617","syncSourceHost" : "172.17.0.4:27617","syncSourceId" : 2,"infoMessage" : "","configVersion" : 3,"self" : true,"lastHeartbeatMessage" : ""},
- {"_id" : 1,"name" : "172.17.0.5:27618","health" : 1,"state" : 1,"stateStr" : "PRIMARY","uptime" : 263943,"optime" : {"ts" : Timestamp(1539406655, 1),"t" : NumberLong(4)
- },"optimeDurable" : {"ts" : Timestamp(1539406655, 1),"t" : NumberLong(4)
- },"optimeDate" : ISODate("2018-10-13T04:57:35Z"),"optimeDurableDate" : ISODate("2018-10-13T04:57:35Z"),"lastHeartbeat" : ISODate("2018-10-13T04:57:38.894Z"),"lastHeartbeatRecv" : ISODate("2018-10-13T04:57:38.892Z"),"pingMs" : NumberLong(0),"lastHeartbeatMessage" : "","syncingTo" : "","syncSourceHost" : "","syncSourceId" : -1,"infoMessage" : "","electionTime" : Timestamp(1539142727, 1),"electionDate" : ISODate("2018-10-10T03:38:47Z"),"configVersion" : 3},
- {"_id" : 2,"name" : "172.17.0.4:27617","health" : 1,"state" : 2,"stateStr" : "SECONDARY","uptime" : 264390,"optime" : {"ts" : Timestamp(1539406655, 1),"t" : NumberLong(4)
- },"optimeDurable" : {"ts" : Timestamp(1539406655, 1),"t" : NumberLong(4)
- },"optimeDate" : ISODate("2018-10-13T04:57:35Z"),"optimeDurableDate" : ISODate("2018-10-13T04:57:35Z"),"lastHeartbeat" : ISODate("2018-10-13T04:57:38.893Z"),"lastHeartbeatRecv" : ISODate("2018-10-13T04:57:38.893Z"),"pingMs" : NumberLong(0),"lastHeartbeatMessage" : "","syncingTo" : "172.17.0.5:27618","syncSourceHost" : "172.17.0.5:27618","syncSourceId" : 1,"infoMessage" : "","configVersion" : 3}
- ],
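When skimming rs.status() output like the above, name and stateStr are usually all you need. One quick way is to filter a saved copy of the output (the here-doc below is a trimmed stand-in for that saved file):

```shell
# Save rs.status() output to a file, then pull out just the member names
# and their roles. The sample here is a trimmed copy of the output above.
cat > /tmp/rs_status.txt <<'EOF'
"name" : "172.17.0.6:27619", "stateStr" : "SECONDARY",
"name" : "172.17.0.5:27618", "stateStr" : "PRIMARY",
"name" : "172.17.0.4:27617", "stateStr" : "SECONDARY",
EOF
grep -oE '"(name|stateStr)" : "[^"]*"' /tmp/rs_status.txt
```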
Having covered replica set configuration, we can move on to sharding.
Sharding uses some algorithm to spread data across different shards, but this creates a problem: if any one shard fails, the data set as a whole becomes unavailable.
So the data on each shard is itself stored as a replica set; that gives the cluster fault tolerance along with the sharding, a concept quite similar to RAID.
A MongoDB sharded cluster involves the following roles:
Config Server: stores the sharding metadata for the cluster
Shard: where the sharded data is actually stored
mongos: the shard router, and the instance that clients actually connect to
[Architecture diagram borrowed from the web; image not included]
Still using the same three containers as before, create the directories each role needs:
conf for the Config Server
mongos for mongos
We split the data into 3 shards here, so shard1, shard2, and shard3
- # mkdir -p /opt/mongodb/conf/data
- # mkdir -p /opt/mongodb/conf/log
- # mkdir -p /opt/mongodb/mongos/data
- # mkdir -p /opt/mongodb/mongos/log
- # mkdir -p /opt/mongodb/shard1/data
- # mkdir -p /opt/mongodb/shard1/log
- # mkdir -p /opt/mongodb/shard2/data
- # mkdir -p /opt/mongodb/shard2/log
- # mkdir -p /opt/mongodb/shard3/data
- # mkdir -p /opt/mongodb/shard3/log
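The ten mkdir calls above follow one pattern, so a loop can create the whole layout. The sketch below uses a scratch base directory so it is safe to run anywhere; on the real containers, base would be /opt/mongodb:

```shell
# Create <base>/<role>/{data,log} for every role in the cluster.
# base points at a scratch dir here; on the containers it would be /opt/mongodb.
base=$(mktemp -d)
for role in conf mongos shard1 shard2 shard3; do
  mkdir -p "$base/$role/data" "$base/$role/log"
done
ls "$base/shard1"    # shows the data and log subdirectories
```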
Edit the config file for each role
Config Server
- # vi /usr/local/mongodb/conf/conf.conf
- dbpath=/opt/mongodb/conf/data
- logpath=/opt/mongodb/conf/log/conf.log
- pidfilepath=/opt/mongodb/conf/log/conf.pid
- logappend=true
- replSet=configs
- bind_ip=172.17.0.6
- port=27019
- fork=true
- maxConns=2000
- configsvr=true #add this line on Config Server instances
mongos
- # vi /usr/local/mongodb/conf/mongos.conf
- logpath=/opt/mongodb/mongos/log/mongos.log
- pidfilepath=/opt/mongodb/mongos/log/mongos.pid
- logappend=true
- bind_ip=172.17.0.6
- port=27419
- fork=true
- maxConns=2000
- configdb=configs/172.17.0.4:27017,172.17.0.5:27018,172.17.0.6:27019 #addresses of the Config Server replica set
shard
- # vi /usr/local/mongodb/conf/shard1.conf
- pidfilepath = /opt/mongodb/shard1/log/shard1.pid
- dbpath = /opt/mongodb/shard1/data
- logpath = /opt/mongodb/shard1/log/shard1.log
- logappend = true
- bind_ip = 172.17.0.6
- port = 27119
- fork = true
- replSet = shard1
- shardsvr = true
- maxConns = 20000
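shard2.conf and shard3.conf differ from shard1.conf only in the paths, port, and replSet name, so all three can be stamped out in a loop. The port pattern here (27119/27219/27319 on this host) follows the sh.addShard() calls later in the article; the sketch writes to a scratch directory so it is safe to try, and confdir would be /usr/local/mongodb/conf for real use:

```shell
# Generate shard1..shard3 config files; only the shard number varies.
confdir=$(mktemp -d)          # scratch dir; use /usr/local/mongodb/conf for real
for n in 1 2 3; do
  cat > "$confdir/shard$n.conf" <<EOF
pidfilepath = /opt/mongodb/shard$n/log/shard$n.pid
dbpath = /opt/mongodb/shard$n/data
logpath = /opt/mongodb/shard$n/log/shard$n.log
logappend = true
bind_ip = 172.17.0.6
port = 27${n}19
fork = true
replSet = shard$n
shardsvr = true
maxConns = 20000
EOF
done
grep '^port' "$confdir/shard2.conf"   # port = 27219
```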
Once the files are in place, start the mongod instances for the Config Server, shard1, shard2, and shard3.
Following the replica set procedure described above, configure a replica set for each of the Config Server, shard1, shard2, and shard3.
Finally, start the mongos instance; note the command is mongos, not mongod:
- # mongos -f /usr/local/mongodb/conf/mongos.conf
Connect to the mongos instance and enable sharding:
- # mongo --host 172.17.0.6:27419
> sh.addShard("shard1/172.17.0.4:27117,172.17.0.5:27118,172.17.0.6:27119")
> sh.addShard("shard2/172.17.0.4:27217,172.17.0.5:27218,172.17.0.6:27219")
> sh.addShard("shard3/172.17.0.4:27317,172.17.0.5:27318,172.17.0.6:27319")
> db.runCommand({ enablesharding: "testshard" }) //enable sharding for the database
> db.runCommand({ shardcollection: "testshard.test", key: { id: "hashed" } }) //shard the collection, specifying a hashed shard key
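The hashed shard key in the last command is what spreads documents evenly: mongos hashes the value of id, and the hash determines which shard owns the document. A toy sketch of that idea (real MongoDB uses an md5-based 64-bit hash and assigns hash ranges, called chunks, to shards; cksum plus a modulo only mimics the even spread):

```shell
# Toy illustration of hashed sharding: hash the key, map the hash to a shard.
# Real MongoDB hashes with md5 and assigns hash *ranges* (chunks) to shards.
route() {   # route <id> <num_shards>
  h=$(printf '%s' "$1" | cksum | cut -d' ' -f1)
  echo "shard$(( h % $2 + 1 ))"
}
for id in 1 2 3 4 5; do
  printf '%s -> %s\n' "$id" "$(route "$id" 3)"
done
```

The same id always routes to the same shard, while nearby ids scatter, which is exactly why a hashed key avoids hot-spotting on monotonically increasing ids.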
At this point, the MongoDB sharded cluster is fully configured. For high availability, you can run mongos instances in multiple containers and pair them with keepalived.