MongoDB 3: Installing and Setting Up Sharding with Replica Sets
2016-06-22 13:37
1. Environment
Linux: CentOS 6.5
MongoDB: 3.2.7
10.30.44.56  shard1 primary, shard2 secondary
10.30.44.57  shard2 primary, shard1 secondary
10.30.44.58  config server, arbiters, router
2. Download MongoDB
https://www.mongodb.com/download-center?jmp=nav#community
[root@VM6-56 ~]$ curl -O https://fastdl.mongodb.org/linux/mongodb-linux-x86_64-3.2.7.tgz
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 71.6M 0 527k 0 0 25212 0 0:49:40 0:00:21 0:49:19 23015
Copy the downloaded mongodb-linux-x86_64-3.2.7.tgz to the other two machines:
[root@VM6-56 ~]$ scp mongodb-linux-x86_64-3.2.7.tgz 10.30.44.57:/root
[root@VM6-56 ~]$ scp mongodb-linux-x86_64-3.2.7.tgz 10.30.44.58:/root
3. Install MongoDB (run on all three machines)
[root@VM6-56 ~]# groupadd mongodb
[root@VM6-56 ~]# useradd -g mongodb mongodb
[root@VM6-56 ~]# passwd mongodb
[root@VM6-56 ~]# mkdir /u01/mongodb -p
[root@VM6-56 ~]# chown mongodb.mongodb /u01/mongodb
[root@VM6-56 ~]# chown mongodb.mongodb mongodb-linux-x86_64-3.2.7.tgz
[root@VM6-56 ~]# mv mongodb-linux-x86_64-3.2.7.tgz /u01/mongodb
[root@VM6-56 ~]# su - mongodb
[mongodb@VM6-56 ~]$ cd /u01/mongodb
[mongodb@VM6-56 mongodb]$ tar -zxvf mongodb-linux-x86_64-3.2.7.tgz
[mongodb@VM6-56 mongodb]$ mv mongodb-linux-x86_64-rhel62-3.2.7 mongodb327
[mongodb@VM6-56 mongodb]$ cd
[mongodb@VM6-56 ~]$ vi .bash_profile
export PATH=/u01/mongodb/mongodb327/bin:$PATH:$HOME/bin
4. Create directories
Run the following on all three machines:
[mongodb@VM6-56 mongodb]$ pwd
/u01/mongodb
[mongodb@VM6-56 mongodb]$ mkdir mongodb #create the MongoDB base directory
[mongodb@VM6-56 mongodb]$ cd mongodb
[mongodb@VM6-56 mongodb]$ mkdir data logs conf #create data, log, and config directories
Run the following on 56 and 57:
[mongodb@VM6-56 mongodb]$ cd data
[mongodb@VM6-56 data]$ mkdir shard1 shard2 #create the shard data directories
Run the following on 58:
[mongodb@VM6-58 mongodb]$ cd data
[mongodb@VM6-58 data]$ mkdir shard1_arb shard2_arb config #create data directories for the shard1 arbiter, the shard2 arbiter, and the config database
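The directory layout above is easy to get subtly wrong across three hosts, so it can be scripted. A minimal sketch, assuming the same /u01/mongodb/mongodb base path used in this guide (make_dirs is a hypothetical helper, not part of MongoDB):

```shell
#!/bin/sh
# Create the per-host directory tree.
# role "shard"  -> data/shard1 data/shard2                     (hosts 56, 57)
# role "config" -> data/shard1_arb data/shard2_arb data/config (host 58)
BASE=${BASE:-/u01/mongodb/mongodb}

make_dirs() {
    role=$1
    mkdir -p "$BASE/logs" "$BASE/conf"
    case $role in
        shard)  mkdir -p "$BASE/data/shard1" "$BASE/data/shard2" ;;
        config) mkdir -p "$BASE/data/shard1_arb" "$BASE/data/shard2_arb" \
                         "$BASE/data/config" ;;
        *)      echo "unknown role: $role" >&2; return 1 ;;
    esac
}
```

mkdir -p makes the script safe to re-run if a directory already exists.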
5. Configuration files
mongod takes its startup parameters either on the command line or from a file; here they go in files.
The following two config files go on 56 and 57:
[mongodb@VM6-56 mongodb]$ cd conf
[mongodb@VM6-56 conf]$ vi shard1.conf #on 56 and 57
port = 31001 #port
dbpath = /u01/mongodb/mongodb/data/shard1 #data directory
logpath = /u01/mongodb/mongodb/logs/shard1.log #log file
logappend = true #append to the log
pidfilepath = /u01/mongodb/mongodb/logs/shard1.pid #PID file location
directoryperdb = true
replSet = rep1 #replica set name
oplogSize = 1024 #oplog size in MB
fork = true #run as a daemon
storageEngine=wiredTiger #storage engine
shardsvr = true #this instance is a shard server
#journal = true #journaling
[mongodb@VM6-56 conf]$ vi shard2.conf #on 56 and 57
port = 31002 #port
dbpath = /u01/mongodb/mongodb/data/shard2 #data directory
logpath = /u01/mongodb/mongodb/logs/shard2.log #log file
logappend = true #append to the log
pidfilepath = /u01/mongodb/mongodb/logs/shard2.pid #PID file location
directoryperdb = true
replSet = rep2 #replica set name
oplogSize = 1024 #oplog size in MB
fork = true #run as a daemon
storageEngine=wiredTiger #storage engine
shardsvr = true #this instance is a shard server
#journal = true #journaling
The following four config files go on 58:
[mongodb@VM6-58 conf]$ vi shard1_arb.conf
port = 31001
dbpath = /u01/mongodb/mongodb/data/shard1_arb
logpath = /u01/mongodb/mongodb/logs/shard1_arb.log
logappend = true
pidfilepath = /u01/mongodb/mongodb/logs/shard1_arb.pid
directoryperdb = true
replSet = rep1
oplogSize = 1024
fork = true
storageEngine=wiredTiger
shardsvr = true
#journal = true
[mongodb@VM6-58 conf]$ vi shard2_arb.conf
port = 31002
dbpath = /u01/mongodb/mongodb/data/shard2_arb
logpath = /u01/mongodb/mongodb/logs/shard2_arb.log
logappend = true
pidfilepath = /u01/mongodb/mongodb/logs/shard2_arb.pid
directoryperdb = true
replSet = rep2
oplogSize = 1024
fork = true
storageEngine=wiredTiger
shardsvr = true
#journal = true
[mongodb@VM6-58 conf]$ vi config.conf
port = 31003
dbpath = /u01/mongodb/mongodb/data/config
logpath = /u01/mongodb/mongodb/logs/config.log
logappend = true
pidfilepath = /u01/mongodb/mongodb/logs/config.pid
directoryperdb = true
oplogSize = 1024
fork = true
storageEngine=wiredTiger
configsvr = true
#journal = true
[mongodb@VM6-58 conf]$ vi mongos.conf
port = 31010
#no data directory for mongos
logpath = /u01/mongodb/mongodb/logs/mongos.log
logappend = true
pidfilepath = /u01/mongodb/mongodb/logs/mongos.pid
fork = true
configdb = 10.30.44.58:31003 #config server
At this point there are eight config files across the three machines:
On 56: shard1.conf shard2.conf
On 57: shard1.conf shard2.conf
On 58: shard1_arb.conf shard2_arb.conf config.conf mongos.conf
The files are mostly identical. config.conf has no replSet entry, because the config server stores sharding metadata and is not part of a replica set. The router stores no data and works only in memory, so it has no data directory. These parameters are a minimal example, enough for a working cluster; a production deployment needs more configuration.
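Before starting anything, each file can be sanity-checked for the keys every instance here relies on. A small sketch (check_conf is a hypothetical convenience, not an official tool; the key list simply mirrors the files above):

```shell
#!/bin/sh
# Check that a .conf file defines the keys every instance in this guide needs,
# and that its dbpath (when present) exists. Prints what is missing.
check_conf() {
    conf=$1; rc=0
    for key in port logpath pidfilepath fork; do
        grep -q "^$key" "$conf" || { echo "$conf: missing $key"; rc=1; }
    done
    # extract the dbpath value, stripping any trailing #comment
    dbpath=$(sed -n 's/^dbpath[[:space:]]*=[[:space:]]*\([^[:space:]#]*\).*/\1/p' "$conf")
    # mongos.conf has no dbpath, so only check it when present
    if [ -n "$dbpath" ] && [ ! -d "$dbpath" ]; then
        echo "$conf: dbpath $dbpath does not exist"; rc=1
    fi
    return $rc
}
```

e.g. `for f in /u01/mongodb/mongodb/conf/*.conf; do check_conf "$f"; done` before the first start.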
6. Running
On 56, start the first mongod:
[mongodb@VM6-56 ~]$ mongod -f /u01/mongodb/mongodb/conf/shard1.conf
about to fork child process, waiting until server is ready for connections.
forked process: 18789
child process started successfully, parent exiting
Startup succeeded. Check the log; its location was specified in the config file:
[mongodb@VM6-56 logs]$ cat shard1.log
2016-06-25T20:36:24.508+0800 I CONTROL [initandlisten] MongoDB starting : pid=19659 port=31001 dbpath=/u01/mongodb/mongodb/data/shard1 64-bit host=VM6-56
2016-06-25T20:36:24.508+0800 I CONTROL [initandlisten] db version v3.2.7
2016-06-25T20:36:24.508+0800 I CONTROL [initandlisten] git version: 4249c1d2b5999ebbf1fdf3bc0e0e3b3ff5c0aaf2
2016-06-25T20:36:24.508+0800 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.1e-fips 11 Feb 2013
2016-06-25T20:36:24.508+0800 I CONTROL [initandlisten] allocator: tcmalloc
2016-06-25T20:36:24.508+0800 I CONTROL [initandlisten] modules: none
2016-06-25T20:36:24.508+0800 I CONTROL [initandlisten] build environment:
2016-06-25T20:36:24.508+0800 I CONTROL [initandlisten] distmod: rhel62
2016-06-25T20:36:24.508+0800 I CONTROL [initandlisten] distarch: x86_64
2016-06-25T20:36:24.508+0800 I CONTROL [initandlisten] target_arch: x86_64
2016-06-25T20:36:24.508+0800 I CONTROL [initandlisten] options: { config: "/u01/mongodb/mongodb/conf/shard1.conf", net: { port: 31001 }, processManagement: { fork: true, pidFilePath: "/u01/mongodb/mongodb/logs/shard1.pid" }, replication: { oplogSizeMB: 1024, replSet: "rep1" }, sharding: { clusterRole: "shardsvr" }, storage: { dbPath: "/u01/mongodb/mongodb/data/shard1", directoryPerDB: true, engine: "wiredTiger" }, systemLog: { destination: "file", logAppend: true, path: "/u01/mongodb/mongodb/logs/shard1.log" } }
2016-06-25T20:36:24.534+0800 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=18G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0),
2016-06-25T20:36:25.172+0800 I CONTROL [initandlisten]
2016-06-25T20:36:25.172+0800 I CONTROL [initandlisten] ** WARNING: You are running on a NUMA machine.
2016-06-25T20:36:25.172+0800 I CONTROL [initandlisten] ** We suggest launching mongod like this to avoid performance problems:
2016-06-25T20:36:25.172+0800 I CONTROL [initandlisten] ** numactl --interleave=all mongod [other options]
2016-06-25T20:36:25.172+0800 I CONTROL [initandlisten]
2016-06-25T20:36:25.172+0800 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2016-06-25T20:36:25.172+0800 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2016-06-25T20:36:25.172+0800 I CONTROL [initandlisten]
2016-06-25T20:36:25.172+0800 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2016-06-25T20:36:25.172+0800 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2016-06-25T20:36:25.172+0800 I CONTROL [initandlisten]
2016-06-25T20:36:25.172+0800 I CONTROL [initandlisten] ** WARNING: soft rlimits too low. rlimits set to 1024 processes, 655350 files. Number of processes should be at least 327675 : 0.5 times number of files.
2016-06-25T20:36:25.172+0800 I CONTROL [initandlisten]
2016-06-25T20:36:25.175+0800 I REPL [initandlisten] Did not find local voted for document at startup; NoMatchingDocument: Did not find replica set lastVote document in local.replset.election
2016-06-25T20:36:25.175+0800 I REPL [initandlisten] Did not find local replica set configuration document at startup; NoMatchingDocument: Did not find replica set configuration document in local.system.replset
2016-06-25T20:36:25.176+0800 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/u01/mongodb/mongodb/data/shard1/diagnostic.data'
2016-06-25T20:36:25.176+0800 I NETWORK [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
2016-06-25T20:36:25.177+0800 I NETWORK [initandlisten] waiting for connections on port 31001
The log content matters and deserves careful reading. In particular, note the four warnings above:
** WARNING: You are running on a NUMA machine.
** We suggest launching mongod like this to avoid performance problems:
** numactl --interleave=all mongod [other options]
** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
** We suggest setting it to 'never'
** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
** We suggest setting it to 'never'
** WARNING: soft rlimits too low. rlimits set to 1024 processes, 655350 files. Number of processes should be at least 327675 : 0.5 times number of files.
To fix the second and third warnings, run as root:
[root@VM6-56 ~]# echo "never" > /sys/kernel/mm/transparent_hugepage/enabled
[root@VM6-56 ~]# echo "never" > /sys/kernel/mm/transparent_hugepage/defrag
To fix the fourth warning, as the mongodb user:
[mongodb@VM6-56 ~]$ vi .bash_profile #append at the end of the file:
ulimit -f unlimited -t unlimited -v unlimited -n 64000 -u 64000
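The rlimit warning encodes a simple rule: the process limit should be at least half the open-file limit. A sketch of that check, to verify the values from `ulimit -u` and `ulimit -n` before restarting mongod (rlimits_ok is a hypothetical helper):

```shell
#!/bin/sh
# Reproduce mongod's soft-rlimit check: processes >= 0.5 * open files.
# Pass the values reported by `ulimit -u` and `ulimit -n`.
rlimits_ok() {
    nproc=$1 nofile=$2
    # integer arithmetic: nproc must be at least half of nofile
    [ "$nproc" -ge $((nofile / 2)) ]
}

# The warning case from the log above: 1024 processes, 655350 files
rlimits_ok 1024 655350 || echo "soft rlimits too low"
# After the .bash_profile change: 64000 processes, 64000 files
rlimits_ok 64000 64000 && echo "rlimits OK"
```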
To fix the first warning, change how mongod is launched:
[mongodb@VM6-56 ~]$ numactl --interleave=all mongod -f /u01/mongodb/mongodb/conf/shard1.conf
about to fork child process, waiting until server is ready for connections.
forked process: 6426
ERROR: child process failed, exited with error number 48
It failed; check the log:
[mongodb@VM6-56 logs]$ tail /u01/mongodb/mongodb/logs/shard1.log
2016-06-26T14:31:46.500+0800 E NETWORK [initandlisten] listen(): bind() failed errno:98 Address already in use for socket: 0.0.0.0:31001
2016-06-26T14:31:46.500+0800 E NETWORK [initandlisten]
addr already in use
2016-06-26T14:31:46.500+0800 E STORAGE [initandlisten] Failed to set up sockets during startup.
2016-06-26T14:31:46.500+0800 I CONTROL [initandlisten] dbexit: rc: 48
The log shows the cause is addr already in use: a mongod is still running on this port and must be stopped first:
[root@VM6-56 ~]# ps -ef | grep mongo
root 8083 6176 0 14:40 pts/0 00:00:00 grep mongo
mongodb 18813 1 0 Jun25 ? 00:04:11 mongod -f /u01/mongodb/mongodb/conf/shard2.conf
mongodb 19659 1 0 Jun25 ? 00:04:10 mongod -f /u01/mongodb/mongodb/conf/shard1.conf
[root@VM6-56 ~]# kill 19659
[root@VM6-56 ~]# ps -ef | grep mongo
root 8182 6176 0 14:40 pts/0 00:00:00 grep mongo
mongodb 18813 1 0 Jun25 ? 00:04:11 mongod -f /u01/mongodb/mongodb/conf/shard2.conf
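Since every .conf above sets pidfilepath, an instance can also be stopped from its PID file instead of grepping ps output. A hedged sketch (stop_by_pidfile is a hypothetical helper; for a clean mongod shutdown, db.shutdownServer() from the mongo shell is the documented route):

```shell
#!/bin/sh
# Stop a daemon via the PID file it wrote (pidfilepath in the .conf files),
# then wait briefly for the process to exit.
stop_by_pidfile() {
    pidfile=$1
    [ -r "$pidfile" ] || { echo "no pidfile: $pidfile" >&2; return 1; }
    pid=$(cat "$pidfile")
    kill "$pid" 2>/dev/null || return 0   # already gone
    i=0
    while kill -0 "$pid" 2>/dev/null && [ "$i" -lt 10 ]; do
        sleep 1; i=$((i + 1))
    done
    ! kill -0 "$pid" 2>/dev/null          # fail if it is still alive
}

# e.g. stop_by_pidfile /u01/mongodb/mongodb/logs/shard1.pid
```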
Start it again:
[mongodb@VM6-56 ~]$ numactl --interleave=all mongod -f /u01/mongodb/mongodb/conf/shard1.conf
about to fork child process, waiting until server is ready for connections.
forked process: 8699
child process started successfully, parent exiting
Startup succeeded. Check the log again for any remaining warnings:
2016-06-26T14:43:18.705+0800 I CONTROL [initandlisten] MongoDB starting : pid=8699 port=31001 dbpath=/u01/mongodb/mongodb/data/shard1 64-bit host=VM6-56
2016-06-26T14:43:18.705+0800 I CONTROL [initandlisten] db version v3.2.7
2016-06-26T14:43:18.705+0800 I CONTROL [initandlisten] git version: 4249c1d2b5999ebbf1fdf3bc0e0e3b3ff5c0aaf2
2016-06-26T14:43:18.705+0800 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.1e-fips 11 Feb 2013
2016-06-26T14:43:18.705+0800 I CONTROL [initandlisten] allocator: tcmalloc
2016-06-26T14:43:18.705+0800 I CONTROL [initandlisten] modules: none
2016-06-26T14:43:18.705+0800 I CONTROL [initandlisten] build environment:
2016-06-26T14:43:18.705+0800 I CONTROL [initandlisten] distmod: rhel62
2016-06-26T14:43:18.705+0800 I CONTROL [initandlisten] distarch: x86_64
2016-06-26T14:43:18.705+0800 I CONTROL [initandlisten] target_arch: x86_64
2016-06-26T14:43:18.706+0800 I CONTROL [initandlisten] options: { config: "/u01/mongodb/mongodb/conf/shard1.conf", net: { port: 31001 }, processManagement: { fork: true, pidFilePath: "/u01/mongodb/mongodb/logs/shard1.pid" }, replication: { oplogSizeMB: 1024,
replSet: "rep1" }, sharding: { clusterRole: "shardsvr" }, storage: { dbPath: "/u01/mongodb/mongodb/data/shard1", directoryPerDB: true, engine: "wiredTiger" }, systemLog: { destination: "file", logAppend: true, path: "/u01/mongodb/mongodb/logs/shard1.log" }
}
2016-06-26T14:43:18.756+0800 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=18G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0),
2016-06-26T14:43:19.452+0800 I REPL [initandlisten] Did not find local voted for document at startup; NoMatchingDocument: Did not find replica set lastVote document in local.replset.election
2016-06-26T14:43:19.452+0800 I REPL [initandlisten] Did not find local replica set configuration document at startup; NoMatchingDocument: Did not find replica set configuration document in local.system.replset
2016-06-26T14:43:19.453+0800 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/u01/mongodb/mongodb/data/shard1/diagnostic.data'
2016-06-26T14:43:19.453+0800 I NETWORK [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
2016-06-26T14:43:19.455+0800 I NETWORK [initandlisten] waiting for connections on port 31001
The earlier warnings are gone.
On 56, start the second mongod:
[mongodb@VM6-56 ~]$ numactl --interleave=all mongod -f /u01/mongodb/mongodb/conf/shard2.conf
about to fork child process, waiting until server is ready for connections.
forked process: 10906
child process started successfully, parent exiting
Start the mongod processes on 57 and 58 the same way, applying the same warning fixes:
[mongodb@VM6-57 ~]$ numactl --interleave=all mongod -f /u01/mongodb/mongodb/conf/shard1.conf
[mongodb@VM6-57 ~]$ numactl --interleave=all mongod -f /u01/mongodb/mongodb/conf/shard2.conf
[mongodb@VM6-58 ~]$ numactl --interleave=all mongod -f /u01/mongodb/mongodb/conf/shard1_arb.conf
[mongodb@VM6-58 ~]$ numactl --interleave=all mongod -f /u01/mongodb/mongodb/conf/shard2_arb.conf
[mongodb@VM6-58 ~]$ numactl --interleave=all mongod -f /u01/mongodb/mongodb/conf/config.conf
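The start commands above all follow one pattern, so each host can drive startup from its list of config files. A sketch where the launcher command is a parameter, so the loop can be exercised without mongod; on the real hosts it would be "numactl --interleave=all mongod" (start_all is a hypothetical helper):

```shell
#!/bin/sh
# Launch one process per config file through the given launcher command.
# Usage: start_all "<launcher>" conf1 [conf2 ...]
start_all() {
    launcher=$1; shift
    for conf in "$@"; do
        # intentional word splitting: the launcher may be multi-word
        $launcher -f "$conf" || { echo "failed to start: $conf" >&2; return 1; }
    done
}

# On 57 this becomes:
#   start_all "numactl --interleave=all mongod" \
#       /u01/mongodb/mongodb/conf/shard1.conf \
#       /u01/mongodb/mongodb/conf/shard2.conf
```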
7. Initialize the replica sets
[mongodb@VM6-56 ~]$ mongo --port 31001 admin #connect to the admin database
> db.runCommand({"replSetInitiate" : {"_id" : "rep1", "members" : [{"_id" : 0, "host" : "10.30.44.56:31001", "priority" : 10}, {"_id" : 1, "host" : "10.30.44.57:31001", "priority" : 9}, {"_id" : 2, "host" : "10.30.44.58:31001", "arbiterOnly"
: true}]}});
{ "ok" : 1 } #the first replica set is initialized
rep1:OTHER>
rep1:SECONDARY>
rep1:PRIMARY> #we ran the initiation on 56, whose member has the highest priority (10) in this set, so this mongod ends up PRIMARY
rep1:PRIMARY> rs.status() #check the replica set status
{
"set" : "rep1",
"date" : ISODate("2016-06-26T08:23:58.648Z"),
"myState" : 1,
"term" : NumberLong(1),
"heartbeatIntervalMillis" : NumberLong(2000),
"members" : [
{
"_id" : 0,
"name" : "10.30.44.56:31001",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 6040,
"optime" : {
"ts" : Timestamp(1466928690, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2016-06-26T08:11:30Z"),
"electionTime" : Timestamp(1466928689, 1),
"electionDate" : ISODate("2016-06-26T08:11:29Z"),
"configVersion" : 1,
"self" : true
},
{
"_id" : 1,
"name" : "10.30.44.57:31001",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 760,
"optime" : {
"ts" : Timestamp(1466928690, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2016-06-26T08:11:30Z"),
"lastHeartbeat" : ISODate("2016-06-26T08:23:57.916Z"),
"lastHeartbeatRecv" : ISODate("2016-06-26T08:23:57.662Z"),
"pingMs" : NumberLong(0),
"syncingTo" : "10.30.44.56:31001",
"configVersion" : 1
},
{
"_id" : 2,
"name" : "10.30.44.58:31001",
"health" : 1,
"state" : 7,
"stateStr" : "ARBITER",
"uptime" : 760,
"lastHeartbeat" : ISODate("2016-06-26T08:23:57.916Z"),
"lastHeartbeatRecv" : ISODate("2016-06-26T08:23:55.397Z"),
"pingMs" : NumberLong(0),
"configVersion" : 1
}
],
"ok" : 1
}
rep1:PRIMARY> rs.config() #check the member priorities
{
"_id" : "rep1",
"version" : 1,
"protocolVersion" : NumberLong(1),
"members" : [
{
"_id" : 0,
"host" : "10.30.44.56:31001",
"arbiterOnly" : false,
"buildIndexes" : true,
"hidden" : false,
"priority" : 10,
"tags" : {
},
"slaveDelay" : NumberLong(0),
"votes" : 1
},
{
"_id" : 1,
"host" : "10.30.44.57:31001",
"arbiterOnly" : false,
"buildIndexes" : true,
"hidden" : false,
"priority" : 9,
"tags" : {
},
"slaveDelay" : NumberLong(0),
"votes" : 1
},
{
"_id" : 2,
"host" : "10.30.44.58:31001",
"arbiterOnly" : true,
"buildIndexes" : true,
"hidden" : false,
"priority" : 1,
"tags" : {
},
"slaveDelay" : NumberLong(0),
"votes" : 1
}
],
"settings" : {
"chainingAllowed" : true,
"heartbeatIntervalMillis" : 2000,
"heartbeatTimeoutSecs" : 10,
"electionTimeoutMillis" : 10000,
"getLastErrorModes" : {
},
"getLastErrorDefaults" : {
"w" : 1,
"wtimeout" : 0
},
"replicaSetId" : ObjectId("576f8e26a0c4408c639a74ef")
}
}
[mongodb@VM6-56 ~]$ mongo --port 31002 admin
> db.runCommand({"replSetInitiate" : {"_id" : "rep2", "members" : [{"_id" : 0, "host" : "10.30.44.56:31002", "priority" : 9}, {"_id" : 1, "host" : "10.30.44.57:31002", "priority" : 10}, {"_id" : 2, "host" : "10.30.44.58:31002", "arbiterOnly"
: true}]}});
{ "ok" : 1 }
rep2:OTHER>
rep2:SECONDARY>
rep2:SECONDARY> #we ran the initiation on 56, whose member has priority 9 in this set, so this mongod stays SECONDARY
rep2:SECONDARY> rs.status() #check the replica set status
rep2:SECONDARY> rs.config() #check the member priorities
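The two replSetInitiate documents differ only in the set name, the port, and which member gets priority 10, so the JSON can be generated rather than typed twice. A sketch (initiate_doc is a hypothetical helper; the hosts and priorities are the ones used above):

```shell
#!/bin/sh
# Build a replSetInitiate command document for a 2-member + arbiter set.
# Usage: initiate_doc <setname> <prio-10 host:port> <prio-9 host:port> <arbiter host:port>
initiate_doc() {
    printf '{"replSetInitiate": {"_id": "%s", "members": [' "$1"
    printf '{"_id": 0, "host": "%s", "priority": 10}, ' "$2"
    printf '{"_id": 1, "host": "%s", "priority": 9}, ' "$3"
    printf '{"_id": 2, "host": "%s", "arbiterOnly": true}]}}' "$4"
}

# rep1, with 56 as the high-priority member; feed the result to the mongo shell:
#   mongo --port 31001 admin --eval "db.runCommand($(initiate_doc rep1 \
#       10.30.44.56:31001 10.30.44.57:31001 10.30.44.58:31001))"
```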
8. Configure sharding
The data lives on 56 and 57, so the router goes on 58. Run the following on 58:
[mongodb@VM6-58 ~]$ mongos -f /u01/mongodb/mongodb/conf/mongos.conf #start the router
2016-06-28T11:38:54.920+0800 W SHARDING [main] Running a sharded cluster with fewer than 3 config servers should only be done for testing purposes and is not recommended for production.
about to fork child process, waiting until server is ready for connections.
forked process: 22219
child process started successfully, parent exiting
[mongodb@VM6-58 logs]$ mongo --port 31010 #connect to the mongos router
MongoDB shell version: 3.2.7
connecting to: 127.0.0.1:31010/test
Welcome to the MongoDB shell.
For interactive help, type "help".
For more comprehensive documentation, see
http://docs.mongodb.org/
Questions? Try the support group
http://groups.google.com/group/mongodb-user
mongos> sh.status()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("5771f14fe8363e34299ca1e5")
}
shards: #no shards yet
active mongoses:
"3.2.7" : 1
balancer:
Currently enabled: yes
Currently running: no
Failed balancer rounds in last 5 attempts: 0
Migration Results for the last 24 hours:
No recent migrations
databases: #no databases yet
mongos> use admin
switched to db admin
mongos> db.runCommand({"addshard" : "rep1/10.30.44.56:31001,10.30.44.57:31001","name" : "shard1"}); #add the first shard
{ "shardAdded" : "shard1", "ok" : 1 }
mongos> db.runCommand({"addshard" : "rep2/10.30.44.56:31002,10.30.44.57:31002","name" : "shard2"}); #add the second shard
{ "shardAdded" : "shard2", "ok" : 1 }
mongos> sh.status()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("5771f14fe8363e34299ca1e5")
}
shards: #two shards now, each backed by a replica set
{ "_id" : "shard1", "host" : "rep1/10.30.44.56:31001,10.30.44.57:31001" }
{ "_id" : "shard2", "host" : "rep2/10.30.44.56:31002,10.30.44.57:31002" }
active mongoses:
"3.2.7" : 1
balancer:
Currently enabled: yes
Currently running: no
Failed balancer rounds in last 5 attempts: 0
Migration Results for the last 24 hours:
No recent migrations
databases:
mongos> db.adminCommand({"enablesharding" : "sh_test"}); #enable sharding for the sh_test database; it need not exist yet
{ "ok" : 1 }
mongos> db.adminCommand({"shardcollection" : "sh_test.test", "key" : {"time" : 1}}); #shard the test collection in sh_test on the time field
{ "collectionsharded" : "sh_test.test", "ok" : 1 }
Generate some test data to verify:
mongos> use sh_test
switched to db sh_test
mongos> for (var i = 1; i <= 9000000; i++) {
... db.test.insert( { x : i , name: "MACLEAN"+i , time: Date()} )
... }
mongos> sh.status()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("5771f14fe8363e34299ca1e5")
}
shards:
{ "_id" : "shard1", "host" : "rep1/10.30.44.56:31001,10.30.44.57:31001" }
{ "_id" : "shard2", "host" : "rep2/10.30.44.56:31002,10.30.44.57:31002" }
active mongoses:
"3.2.7" : 1
balancer:
Currently enabled: yes
Currently running: no
Failed balancer rounds in last 5 attempts: 0
Migration Results for the last 24 hours:
5 : Success
databases: #the data in test is indeed split across shard1 and shard2
{ "_id" : "sh_test", "primary" : "shard1", "partitioned" : true }
sh_test.test
shard key: { "time" : 1 }
unique: false
balancing: true
chunks:
shard1
6
shard2
5
{ "time" : { "$minKey" : 1 } } -->> { "time" : "Tue Jun 28 2016 14:01:23 GMT+0800 (CST)" } on : shard2 Timestamp(2, 0)
{ "time" : "Tue Jun 28 2016 14:01:23 GMT+0800 (CST)" } -->> { "time" : "Tue Jun 28 2016 14:01:25 GMT+0800 (CST)" } on : shard2 Timestamp(3, 0)
{ "time" : "Tue Jun 28 2016 14:01:25 GMT+0800 (CST)" } -->> { "time" : "Tue Jun 28 2016 14:02:54 GMT+0800 (CST)" } on : shard2 Timestamp(4, 0)
{ "time" : "Tue Jun 28 2016 14:02:54 GMT+0800 (CST)" } -->> { "time" : "Tue Jun 28 2016 14:04:37 GMT+0800 (CST)" } on : shard2 Timestamp(5, 0)
{ "time" : "Tue Jun 28 2016 14:04:37 GMT+0800 (CST)" } -->> { "time" : "Tue Jun 28 2016 14:06:06 GMT+0800 (CST)" } on : shard2 Timestamp(6, 0)
{ "time" : "Tue Jun 28 2016 14:06:06 GMT+0800 (CST)" } -->> { "time" : "Tue Jun 28 2016 14:07:53 GMT+0800 (CST)" } on : shard1 Timestamp(6, 1)
{ "time" : "Tue Jun 28 2016 14:07:53 GMT+0800 (CST)" } -->> { "time" : "Tue Jun 28 2016 14:09:23 GMT+0800 (CST)" } on : shard1 Timestamp(4, 2)
{ "time" : "Tue Jun 28 2016 14:09:23 GMT+0800 (CST)" } -->> { "time" : "Tue Jun 28 2016 14:11:36 GMT+0800 (CST)" } on : shard1 Timestamp(4, 3)
{ "time" : "Tue Jun 28 2016 14:11:36 GMT+0800 (CST)" } -->> { "time" : "Tue Jun 28 2016 14:13:07 GMT+0800 (CST)" } on : shard1 Timestamp(5, 2)
{ "time" : "Tue Jun 28 2016 14:13:07 GMT+0800 (CST)" } -->> { "time" : "Tue Jun 28 2016 14:15:53 GMT+0800 (CST)" } on : shard1 Timestamp(5, 3)
{ "time" : "Tue Jun 28 2016 14:15:53 GMT+0800 (CST)" } -->> { "time" : { "$maxKey" : 1 } } on : shard1 Timestamp(5, 4)
Done.
Linux:CentOS 6.5
MongoDB:3.2.7
10.30.44.56 分片1主,分片2从
10.30.44.57 分片2主,分片1从
10.30.44.58 配置、仲裁、路由
2.下载mongodb
https://www.mongodb.com/download-center?jmp=nav#community
[root@VM6-56 ~]$ curl -O https://fastdl.mongodb.org/linux/mongodb-linux-x86_64-3.2.7.tgz
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 71.6M 0 527k 0 0 25212 0 0:49:40 0:00:21 0:49:19 23015
将下载的mongodb-linux-x86_64-3.2.7.tgz复制到另外两台机器上
[root@VM6-56 ~]$scp mongodb-linux-x86_64-3.2.7.tgz 10.30.44.57:/root
[root@VM6-56 ~]$scp mongodb-linux-x86_64-3.2.7.tgz 10.30.44.58:/root
3.安装mongodb(在三台机器上都执行)
[root@VM6-56 ~]# groupadd mongodb
[root@VM6-56 ~]# useradd -g mongodb mongodb
[root@VM6-56 ~]# passwd mongodb
[root@VM6-56 ~]# mkdir /u01/mongodb -p
[root@VM6-56 ~]# chown mongodb.mongodb /u01/mongodb
[root@VM6-56 ~]# chown mongodb.mongodb mongodb-linux-x86_64-3.2.7.tgz
[root@VM6-56 ~]# mv mongodb-linux-x86_64-3.2.7.tgz /u01/mongodb
[root@VM6-56 ~]# su - mongodb
[mongodb@VM6-56 ~]$ cd /u01/mongodb
[mongodb@VM6-56 mongodb]$ tar -zxvf mongodb-linux-x86_64-3.2.7.tgz
[mongodb@VM6-56 mongodb]$ mv mongodb-linux-x86_64-rhel62-3.2.7 mongodb327
[mongodb@VM6-56 mongodb]$ cd
[mongodb@VM6-56 mongodb]$ vi .bash_profile
export PATH=/u01/mongodb/mongodb327/bin:$PATH:$HOME/bin
3.创建目录
在三台机器上都执行以下命令:
[mongodb@VM6-56 mongodb]$ pwd
/u01/mongodb
[mongodb@VM6-56 mongodb]$ mkdir mongodb #创建mongodb的base目录
[mongodb@VM6-56 mongodb]$ cd mongodb
[mongodb@VM6-56 mongodb]$ mkdir data logs conf #创建数据 日志 配置目录
以下命令在56 57上执行:
[mongodb@VM6-56 mongodb]$ cd data
[mongodb@VM6-56data]$ mkdir shard1 shard2 #创建分片数据目录
以下命令在58上执行:
[mongodb@VM6-56 mongodb]$ cd data
[mongodb@VM6-56 data]$ mkdir shard1_arb shard2_arb config #创建分片1仲裁、分片2仲裁和配置库数据目录
4.配置文件
mongod的启动需要指定各种参数,这些参数可以写在文件里也可以在命令行指定,这里采用写在文件里的方式:
以下两个配置文件在56 57上执行:
[mongodb@VM6-56 mongodb]$ cd conf
[mongodb@VM6-56 conf]$ vi shard1.conf #在56 57上执行
port = 31001 #端口
dbpath = /u01/mongodb/mongodb/data/shard1 #数据目录
logpath = /u01/mongodb/mongodb/logs/shard1.log #日志文件
logappend = true #日志记录方式
pidfilepath = /u01/mongodb/mongodb/logs/shard1.pid #pid文件位置
directoryperdb = true
replSet = rep1 #副本集名
oplogSize = 1024 #操作日志大小,单位为M
fork = true #独立进程运行
storageEngine=wiredTiger #新存储引擎
shardsvr = true #是不是分片
#journal = true #是否记日志
[mongodb@VM6-56 conf]$ vi shard2.conf #在56 57上执行
port = 31002 #端口
dbpath = /u01/mongodb/mongodb/data/shard2 #数据目录
logpath = /u01/mongodb/mongodb/logs/shard2.log #日志文件
logappend = true #日志记录方式
pidfilepath = /u01/mongodb/mongodb/logs/shard2.pid #pid文件位置
directoryperdb = true
replSet = rep2 #副本集名
oplogSize = 1024 #操作日志大小,单位为M
fork = true #独立进程运行
storageEngine=wiredTiger #新存储引擎
shardsvr = true #是不是分片
#journal = true #是否记日志
以下三个配置文件在58上执行:
[mongodb@VM6-58 conf]$ vi shard1_arb.conf
port = 31001
dbpath = /u01/mongodb/mongodb/data/shard1_arb
logpath = /u01/mongodb/mongodb/logs/shard1_arb.log
logappend = true
pidfilepath = /u01/mongodb/mongodb/logs/shard1_arb.pid
directoryperdb = true
replSet = rep1
oplogSize = 1024
fork = true
storageEngine=wiredTiger
shardsvr = true
#journal = true
[mongodb@VM6-58 conf]$ vi shard2_arb.conf
port = 31002
dbpath = /u01/mongodb/mongodb/data/shard2_arb
logpath = /u01/mongodb/mongodb/logs/shard2_arb.log
logappend = true
pidfilepath = /u01/mongodb/mongodb/logs/shard2_arb.pid
directoryperdb = true
replSet = rep2
oplogSize = 1024
fork = true
storageEngine=wiredTiger
shardsvr = true
#journal = true
[mongodb@VM6-58 conf]$ vi config.conf
port = 31003
dbpath = /u01/mongodb/mongodb/data/config
logpath = /u01/mongodb/mongodb/logs/config.log
logappend = true
pidfilepath = /u01/mongodb/mongodb/logs/config.pid
directoryperdb = true
oplogSize = 1024
fork = true
storageEngine=wiredTiger
configsvr = true
#journal = true
[mongodb@VM6-58 conf]$ vi mongos.conf
port = 31010
#这没有数据目录
logpath = /u01/mongodb/mongodb/logs/mongos.log
logappend = true
pidfilepath = /u01/mongodb/mongodb/logs/mongos.pid
fork = true
configdb = 10.30.44.58:31003 #配置服务器
至此为止,一共在三台机器上产生了八个配置文件:
56上:shard1.conf shard2.conf
57上:shard1.conf shard2.conf
58上:shard1_arb.conf shard2_arb.conf config.conf mongos.conf
这些文件的内容大部分一样,只是config.conf里没有repSet项,因为配置服务器存放分片信息,与复本无关。路由服务器不存放数据,只使用内存,所以没有数据文件。这里配置参数只是简单的示例参数,可以使用集群工作,在生产环境上还要配置更多的参数。
5.运行
在56上,启动第一个:
[mongodb@VM6-56 ~]$ mongod -f /u01/mongodb/mongodb/conf/shard1.conf
about to fork child process, waiting until server is ready for connections.
forked process: 18789
child process started successfully, parent exiting
启动成功,查看日志,日志文件所在的位置在参数文件里有指定:
[mongodb@VM6-56 logs]$ cat shard1.log
2016-06-25T20:36:24.508+0800 I CONTROL [initandlisten] MongoDB starting : pid=19659 port=31001 dbpath=/u01/mongodb/mongodb/data/shard1 64-bit host=VM6-56
2016-06-25T20:36:24.508+0800 I CONTROL [initandlisten] db version v3.2.7
2016-06-25T20:36:24.508+0800 I CONTROL [initandlisten] git version: 4249c1d2b5999ebbf1fdf3bc0e0e3b3ff5c0aaf2
2016-06-25T20:36:24.508+0800 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.1e-fips 11 Feb 2013
2016-06-25T20:36:24.508+0800 I CONTROL [initandlisten] allocator: tcmalloc
2016-06-25T20:36:24.508+0800 I CONTROL [initandlisten] modules: none
2016-06-25T20:36:24.508+0800 I CONTROL [initandlisten] build environment:
2016-06-25T20:36:24.508+0800 I CONTROL [initandlisten] distmod: rhel62
2016-06-25T20:36:24.508+0800 I CONTROL [initandlisten] distarch: x86_64
2016-06-25T20:36:24.508+0800 I CONTROL [initandlisten] target_arch: x86_64
2016-06-25T20:36:24.508+0800 I CONTROL [initandlisten] options: { config: "/u0
4000
1/mongodb/mongodb/conf/shard1.conf", net: { port: 31001 }, processManagement: { fork: true, pidFilePath: "/u01/mongodb/mongodb/logs/shard1.pid" }, replication: { oplogSizeMB: 1024,
replSet: "rep1" }, sharding: { clusterRole: "shardsvr" }, storage: { dbPath: "/u01/mongodb/mongodb/data/shard1", directoryPerDB: true, engine: "wiredTiger" }, systemLog: { destination: "file", logAppend: true, path: "/u01/mongodb/mongodb/logs/shard1.log" }
}
2016-06-25T20:36:24.534+0800 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=18G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0),
2016-06-25T20:36:25.172+0800 I CONTROL [initandlisten]
2016-06-25T20:36:25.172+0800 I CONTROL [initandlisten] ** WARNING: You are running on a NUMA machine.
2016-06-25T20:36:25.172+0800 I CONTROL [initandlisten] ** We suggest launching mongod like this to avoid performance problems:
2016-06-25T20:36:25.172+0800 I CONTROL [initandlisten] ** numactl --interleave=all mongod [other options]
2016-06-25T20:36:25.172+0800 I CONTROL [initandlisten]
2016-06-25T20:36:25.172+0800 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2016-06-25T20:36:25.172+0800 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2016-06-25T20:36:25.172+0800 I CONTROL [initandlisten]
2016-06-25T20:36:25.172+0800 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2016-06-25T20:36:25.172+0800 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2016-06-25T20:36:25.172+0800 I CONTROL [initandlisten]
2016-06-25T20:36:25.172+0800 I CONTROL [initandlisten] ** WARNING: soft rlimits too low. rlimits set to 1024 processes, 655350 files. Number of processes should be at least 327675 : 0.5 times number of files.
2016-06-25T20:36:25.172+0800 I CONTROL [initandlisten]
2016-06-25T20:36:25.175+0800 I REPL [initandlisten] Did not find local voted for document at startup; NoMatchingDocument: Did not find replica set lastVote document in local.replset.election
2016-06-25T20:36:25.175+0800 I REPL [initandlisten] Did not find local replica set configuration document at startup; NoMatchingDocument: Did not find replica set configuration document in local.system.replset
2016-06-25T20:36:25.176+0800 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/u01/mongodb/mongodb/data/shard1/diagnostic.data'
2016-06-25T20:36:25.176+0800 I NETWORK [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
2016-06-25T20:36:25.177+0800 I NETWORK [initandlisten] waiting for connections on port 31001
日志里的内容特别重要,需要认真阅读和特别留意,这里重点关注上面日志里的三个警告:
** WARNING: You are running on a NUMA machine.
** We suggest launching mongod like this to avoid performance problems:
** numactl --interleave=all mongod [other options]
** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
** We suggest setting it to 'never'
** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
** We suggest setting it to 'never'
** WARNING: soft rlimits too low. rlimits set to 1024 processes, 655350 files. Number of processes should be at least 327675 : 0.5 times number of files.
解决第二、三警告,切换到root用户执行:
[root@VM6-56 ~]# echo "never" > /sys/kernel/mm/transparent_hugepage/enabled
[root@VM6-56 ~]# echo "never" > /sys/kernel/mm/transparent_hugepage/defrag
解决第四个警告,切换到mongodb用户执行:
[mongodb@VM6-56 ~]$ vi .bash_profile #在文件最后加:
ulimit -f unlimited -t unlimited -v unlimited -n 64000 -u 64000
解决第一个警告,是改变mongod的启动方式:
[mongodb@VM6-56 ~]$ numactl --interleave=all mongod -f /u01/mongodb/mongodb/conf/shard1.conf
about to fork child process, waiting until server is ready for connections.
forked process: 6426
ERROR: child process failed, exited with error number 48
报错,查看日志:
[mongodb@VM6-56 logs]$ tail /u01/mongodb/mongodb/logs/shard1.log
2016-06-26T14:31:46.500+0800 E NETWORK [initandlisten] listen(): bind() failed errno:98 Address already in use for socket: 0.0.0.0:31001
2016-06-26T14:31:46.500+0800 E NETWORK [initandlisten]
addr already in use
2016-06-26T14:31:46.500+0800 E STORAGE [initandlisten] Failed to set up sockets during startup.
2016-06-26T14:31:46.500+0800 I CONTROL [initandlisten] dbexit: rc: 48
通过日志可知错误原因是addr already in use,这是由于mongod进程正在运行需要先关闭:
[root@VM6-56 ~]# ps -ef | grep mongo
root 8083 6176 0 14:40 pts/0 00:00:00 grep mongo
mongodb 18813 1 0 Jun25 ? 00:04:11 mongod -f /u01/mongodb/mongodb/conf/shard2.conf
mongodb 19659 1 0 Jun25 ? 00:04:10 mongod -f /u01/mongodb/mongodb/conf/shard1.conf
[root@VM6-56 ~]# kill 19659
[root@VM6-56 ~]# ps -ef | grep mongo
root 8182 6176 0 14:40 pts/0 00:00:00 grep mongo
mongodb 18813 1 0 Jun25 ? 00:04:11 mongod -f /u01/mongodb/mongodb/conf/shard2.conf
Start it again:
[mongodb@VM6-56 ~]$ numactl --interleave=all mongod -f /u01/mongodb/mongodb/conf/shard1.conf
about to fork child process, waiting until server is ready for connections.
forked process: 8699
child process started successfully, parent exiting
Startup succeeded; check the log again for any remaining warnings:
2016-06-26T14:43:18.705+0800 I CONTROL [initandlisten] MongoDB starting : pid=8699 port=31001 dbpath=/u01/mongodb/mongodb/data/shard1 64-bit host=VM6-56
2016-06-26T14:43:18.705+0800 I CONTROL [initandlisten] db version v3.2.7
2016-06-26T14:43:18.705+0800 I CONTROL [initandlisten] git version: 4249c1d2b5999ebbf1fdf3bc0e0e3b3ff5c0aaf2
2016-06-26T14:43:18.705+0800 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.1e-fips 11 Feb 2013
2016-06-26T14:43:18.705+0800 I CONTROL [initandlisten] allocator: tcmalloc
2016-06-26T14:43:18.705+0800 I CONTROL [initandlisten] modules: none
2016-06-26T14:43:18.705+0800 I CONTROL [initandlisten] build environment:
2016-06-26T14:43:18.705+0800 I CONTROL [initandlisten] distmod: rhel62
2016-06-26T14:43:18.705+0800 I CONTROL [initandlisten] distarch: x86_64
2016-06-26T14:43:18.705+0800 I CONTROL [initandlisten] target_arch: x86_64
2016-06-26T14:43:18.706+0800 I CONTROL [initandlisten] options: { config: "/u01/mongodb/mongodb/conf/shard1.conf", net: { port: 31001 }, processManagement: { fork: true, pidFilePath: "/u01/mongodb/mongodb/logs/shard1.pid" }, replication: { oplogSizeMB: 1024,
replSet: "rep1" }, sharding: { clusterRole: "shardsvr" }, storage: { dbPath: "/u01/mongodb/mongodb/data/shard1", directoryPerDB: true, engine: "wiredTiger" }, systemLog: { destination: "file", logAppend: true, path: "/u01/mongodb/mongodb/logs/shard1.log" }
}
2016-06-26T14:43:18.756+0800 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=18G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0),
2016-06-26T14:43:19.452+0800 I REPL [initandlisten] Did not find local voted for document at startup; NoMatchingDocument: Did not find replica set lastVote document in local.replset.election
2016-06-26T14:43:19.452+0800 I REPL [initandlisten] Did not find local replica set configuration document at startup; NoMatchingDocument: Did not find replica set configuration document in local.system.replset
2016-06-26T14:43:19.453+0800 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/u01/mongodb/mongodb/data/shard1/diagnostic.data'
2016-06-26T14:43:19.453+0800 I NETWORK [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
2016-06-26T14:43:19.455+0800 I NETWORK [initandlisten] waiting for connections on port 31001
The earlier warnings are gone this time.
On 56, start the second mongod:
[mongodb@VM6-56 ~]$ numactl --interleave=all mongod -f /u01/mongodb/mongodb/conf/shard2.conf
about to fork child process, waiting until server is ready for connections.
forked process: 10906
child process started successfully, parent exiting
Start the mongod instances on 57 and 58 the same way, applying the same warning fixes:
[mongodb@VM6-57 ~]$ numactl --interleave=all mongod -f /u01/mongodb/mongodb/conf/shard1.conf
[mongodb@VM6-57 ~]$ numactl --interleave=all mongod -f /u01/mongodb/mongodb/conf/shard2.conf
[mongodb@VM6-58 ~]$ numactl --interleave=all mongod -f /u01/mongodb/mongodb/conf/shard1_arb.conf
[mongodb@VM6-58 ~]$ numactl --interleave=all mongod -f /u01/mongodb/mongodb/conf/shard2_arb.conf
[mongodb@VM6-58 ~]$ numactl --interleave=all mongod -f /u01/mongodb/mongodb/conf/config.conf
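With five daemons spread across three hosts, the start commands are easy to mistype. A minimal per-host helper sketch (a hypothetical script; `CONF_DIR` and the config file names follow this guide's layout) that builds the right command for each config file:

```shell
#!/bin/sh
# Build (not run) the start command for each MongoDB config file on this host.
# CONF_DIR and the file names follow this guide's layout; adjust per host.
CONF_DIR="${CONF_DIR:-/u01/mongodb/mongodb/conf}"

start_cmd() {
    case "$1" in
        mongos.conf) echo "mongos -f $CONF_DIR/$1" ;;                          # the router
        *)           echo "numactl --interleave=all mongod -f $CONF_DIR/$1" ;; # data/config/arbiter nodes
    esac
}

# On 56/57 pass shard1.conf shard2.conf; on 58 the arbiter, config, and mongos files.
for c in "$@"; do
    start_cmd "$c"   # pipe the output to sh to actually launch the daemons
done
```

Printing the commands instead of executing them makes a dry run the default; pipe to `sh` once the output looks right.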
6. Initialize the replica sets
[mongodb@VM6-56 ~]$ mongo --port 31001 admin #connect to the admin database
> db.runCommand({"replSetInitiate" : {"_id" : "rep1", "members" : [{"_id" : 0, "host" : "10.30.44.56:31001", "priority" : 10}, {"_id" : 1, "host" : "10.30.44.57:31001", "priority" : 9}, {"_id" : 2, "host" : "10.30.44.58:31001", "arbiterOnly"
: true}]}});
{ "ok" : 1 } #first replica set initialized
rep1:OTHER>
rep1:SECONDARY>
rep1:PRIMARY> #we ran the initialization on 56, and 56 has the highest priority (10) in this replica set, so this mongod ends up PRIMARY
rep1:PRIMARY> rs.status() #verify the replica set status
{
"set" : "rep1",
"date" : ISODate("2016-06-26T08:23:58.648Z"),
"myState" : 1,
"term" : NumberLong(1),
"heartbeatIntervalMillis" : NumberLong(2000),
"members" : [
{
"_id" : 0,
"name" : "10.30.44.56:31001",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 6040,
"optime" : {
"ts" : Timestamp(1466928690, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2016-06-26T08:11:30Z"),
"electionTime" : Timestamp(1466928689, 1),
"electionDate" : ISODate("2016-06-26T08:11:29Z"),
"configVersion" : 1,
"self" : true
},
{
"_id" : 1,
"name" : "10.30.44.57:31001",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 760,
"optime" : {
"ts" : Timestamp(1466928690, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2016-06-26T08:11:30Z"),
"lastHeartbeat" : ISODate("2016-06-26T08:23:57.916Z"),
"lastHeartbeatRecv" : ISODate("2016-06-26T08:23:57.662Z"),
"pingMs" : NumberLong(0),
"syncingTo" : "10.30.44.56:31001",
"configVersion" : 1
},
{
"_id" : 2,
"name" : "10.30.44.58:31001",
"health" : 1,
"state" : 7,
"stateStr" : "ARBITER",
"uptime" : 760,
"lastHeartbeat" : ISODate("2016-06-26T08:23:57.916Z"),
"lastHeartbeatRecv" : ISODate("2016-06-26T08:23:55.397Z"),
"pingMs" : NumberLong(0),
"configVersion" : 1
}
],
"ok" : 1
}
rep1:PRIMARY> rs.config() #verify the members' priorities
{
"_id" : "rep1",
"version" : 1,
"protocolVersion" : NumberLong(1),
"members" : [
{
"_id" : 0,
"host" : "10.30.44.56:31001",
"arbiterOnly" : false,
"buildIndexes" : true,
"hidden" : false,
"priority" : 10,
"tags" : {
},
"slaveDelay" : NumberLong(0),
"votes" : 1
},
{
"_id" : 1,
"host" : "10.30.44.57:31001",
"arbiterOnly" : false,
"buildIndexes" : true,
"hidden" : false,
"priority" : 9,
"tags" : {
},
"slaveDelay" : NumberLong(0),
"votes" : 1
},
{
"_id" : 2,
"host" : "10.30.44.58:31001",
"arbiterOnly" : true,
"buildIndexes" : true,
"hidden" : false,
"priority" : 1,
"tags" : {
},
"slaveDelay" : NumberLong(0),
"votes" : 1
}
],
"settings" : {
"chainingAllowed" : true,
"heartbeatIntervalMillis" : 2000,
"heartbeatTimeoutSecs" : 10,
"electionTimeoutMillis" : 10000,
"getLastErrorModes" : {
},
"getLastErrorDefaults" : {
"w" : 1,
"wtimeout" : 0
},
"replicaSetId" : ObjectId("576f8e26a0c4408c639a74ef")
}
}
[mongodb@VM6-56 ~]$ mongo --port 31002 admin
> db.runCommand({"replSetInitiate" : {"_id" : "rep2", "members" : [{"_id" : 0, "host" : "10.30.44.56:31002", "priority" : 9}, {"_id" : 1, "host" : "10.30.44.57:31002", "priority" : 10}, {"_id" : 2, "host" : "10.30.44.58:31002", "arbiterOnly"
: true}]}});
{ "ok" : 1 }
rep2:OTHER>
rep2:SECONDARY> #we ran the initialization on 56, but in this set 56's priority is 9 (57's is 10), so this mongod ends up SECONDARY once 57 wins the election
rep2:SECONDARY> rs.status() #verify the replica set status
rep2:SECONDARY> rs.config() #verify the members' priorities
7. Configure sharding
56 and 57 hold the data, so the router service runs on 58. Run the following on 58:
[mongodb@VM6-58 ~]$ mongos -f /u01/mongodb/mongodb/conf/mongos.conf #start the mongos router
2016-06-28T11:38:54.920+0800 W SHARDING [main] Running a sharded cluster with fewer than 3 config servers should only be done for testing purposes and is not recommended for production.
about to fork child process, waiting until server is ready for connections.
forked process: 22219
child process started successfully, parent exiting
[mongodb@VM6-58 logs]$ mongo --port 31010 #登录mongos路由服务
MongoDB shell version: 3.2.7
connecting to: 127.0.0.1:31010/test
Welcome to the MongoDB shell.
For interactive help, type "help".
For more comprehensive documentation, see
http://docs.mongodb.org/
Questions? Try the support group
http://groups.google.com/group/mongodb-user
mongos> sh.status()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("5771f14fe8363e34299ca1e5")
}
shards: #no shards yet
active mongoses:
"3.2.7" : 1
balancer:
Currently enabled: yes
Currently running: no
Failed balancer rounds in last 5 attempts: 0
Migration Results for the last 24 hours:
No recent migrations
databases: #no databases yet
mongos> use admin
switched to db admin
mongos> db.runCommand({"addshard" : "rep1/10.30.44.56:31001,10.30.44.57:31001","name" : "shard1"}); #add the first shard
{ "shardAdded" : "shard1", "ok" : 1 }
mongos> db.runCommand({"addshard" : "rep2/10.30.44.56:31002,10.30.44.57:31002","name" : "shard2"}); #add the second shard
{ "shardAdded" : "shard2", "ok" : 1 }
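Each addshard takes a `replicaSetName/host1,host2` seed string plus a shard name, which is easy to get wrong by hand. A small sketch (the `addshard_cmd` helper is hypothetical) that composes the command document from a set name, a shard name, and the member list:

```shell
#!/bin/sh
# Compose the addshard command document for a replica-set-backed shard.
# Usage: addshard_cmd <rsName> <shardName> <host:port>...
addshard_cmd() {
    rs="$1"; shard="$2"; shift 2
    members=$(IFS=,; echo "$*")   # join the remaining host:port args with commas
    echo "db.runCommand({\"addshard\" : \"$rs/$members\", \"name\" : \"$shard\"})"
}

addshard_cmd rep1 shard1 10.30.44.56:31001 10.30.44.57:31001
# prints: db.runCommand({"addshard" : "rep1/10.30.44.56:31001,10.30.44.57:31001", "name" : "shard1"})
```

The printed command can be pasted into the mongos shell, or run via `mongo --port 31010 admin --eval "..."`.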
mongos> sh.status()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("5771f14fe8363e34299ca1e5")
}
shards: #two shards now, each backed by a replica set
{ "_id" : "shard1", "host" : "rep1/10.30.44.56:31001,10.30.44.57:31001" }
{ "_id" : "shard2", "host" : "rep2/10.30.44.56:31002,10.30.44.57:31002" }
active mongoses:
"3.2.7" : 1
balancer:
Currently enabled: yes
Currently running: no
Failed balancer rounds in last 5 attempts: 0
Migration Results for the last 24 hours:
No recent migrations
databases:
mongos> db.adminCommand({"enablesharding" : "sh_test"}); #enable sharding for the sh_test database; it need not exist yet
{ "ok" : 1 }
mongos> db.adminCommand({"shardcollection" : "sh_test.test", "key" : {"time" : 1}}); #shard the test collection in sh_test on the time field
{ "collectionsharded" : "sh_test.test", "ok" : 1 }
Generate some test data to verify:
mongos> use sh_test
switched to db sh_test
mongos> for (var i = 1; i <= 9000000; i++) {
... db.test.insert( { x : i , name: "MACLEAN"+i , time: Date()} )
... }
mongos> sh.status()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("5771f14fe8363e34299ca1e5")
}
shards:
{ "_id" : "shard1", "host" : "rep1/10.30.44.56:31001,10.30.44.57:31001" }
{ "_id" : "shard2", "host" : "rep2/10.30.44.56:31002,10.30.44.57:31002" }
active mongoses:
"3.2.7" : 1
balancer:
Currently enabled: yes
Currently running: no
Failed balancer rounds in last 5 attempts: 0
Migration Results for the last 24 hours:
5 : Success
databases: #the test collection's data is now indeed split across shard1 and shard2
{ "_id" : "sh_test", "primary" : "shard1", "partitioned" : true }
sh_test.test
shard key: { "time" : 1 }
unique: false
balancing: true
chunks:
shard1
6
shard2
5
{ "time" : { "$minKey" : 1 } } -->> { "time" : "Tue Jun 28 2016 14:01:23 GMT+0800 (CST)" } on : shard2 Timestamp(2, 0)
{ "time" : "Tue Jun 28 2016 14:01:23 GMT+0800 (CST)" } -->> { "time" : "Tue Jun 28 2016 14:01:25 GMT+0800 (CST)" } on : shard2 Timestamp(3, 0)
{ "time" : "Tue Jun 28 2016 14:01:25 GMT+0800 (CST)" } -->> { "time" : "Tue Jun 28 2016 14:02:54 GMT+0800 (CST)" } on : shard2 Timestamp(4, 0)
{ "time" : "Tue Jun 28 2016 14:02:54 GMT+0800 (CST)" } -->> { "time" : "Tue Jun 28 2016 14:04:37 GMT+0800 (CST)" } on : shard2 Timestamp(5, 0)
{ "time" : "Tue Jun 28 2016 14:04:37 GMT+0800 (CST)" } -->> { "time" : "Tue Jun 28 2016 14:06:06 GMT+0800 (CST)" } on : shard2 Timestamp(6, 0)
{ "time" : "Tue Jun 28 2016 14:06:06 GMT+0800 (CST)" } -->> { "time" : "Tue Jun 28 2016 14:07:53 GMT+0800 (CST)" } on : shard1 Timestamp(6, 1)
{ "time" : "Tue Jun 28 2016 14:07:53 GMT+0800 (CST)" } -->> { "time" : "Tue Jun 28 2016 14:09:23 GMT+0800 (CST)" } on : shard1 Timestamp(4, 2)
{ "time" : "Tue Jun 28 2016 14:09:23 GMT+0800 (CST)" } -->> { "time" : "Tue Jun 28 2016 14:11:36 GMT+0800 (CST)" } on : shard1 Timestamp(4, 3)
{ "time" : "Tue Jun 28 2016 14:11:36 GMT+0800 (CST)" } -->> { "time" : "Tue Jun 28 2016 14:13:07 GMT+0800 (CST)" } on : shard1 Timestamp(5, 2)
{ "time" : "Tue Jun 28 2016 14:13:07 GMT+0800 (CST)" } -->> { "time" : "Tue Jun 28 2016 14:15:53 GMT+0800 (CST)" } on : shard1 Timestamp(5, 3)
{ "time" : "Tue Jun 28 2016 14:15:53 GMT+0800 (CST)" } -->> { "time" : { "$maxKey" : 1 } } on : shard1 Timestamp(5, 4)
Done.