
Standalone Redis Installation

2015-12-31 10:11
Installing Redis on Linux is straightforward; the steps are as follows (the official site documents them as well).
1. Download the source, extract it, and compile:
tar -zxvf redis-3.0.6.tar.gz
mv redis-3.0.6 redis
cd redis
make
2. After compilation, the src directory contains the executables redis-server, redis-benchmark, and redis-cli; together with redis.conf (which lives in the top-level source directory), copy them into a directory of your choice:
mkdir /usr/softinstall/redis/xbin
cp redis-server /usr/softinstall/redis/xbin
cp redis-benchmark /usr/softinstall/redis/xbin
cp redis-cli /usr/softinstall/redis/xbin
cp ../redis.conf /usr/softinstall/redis/xbin
cd /usr/softinstall/redis/xbin
3. Start the Redis server:
$ redis-server redis.conf
4. Connect with the client to verify it is up:
$ redis-cli
redis> set foo bar
OK
redis> get foo
"bar"
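Under the hood, redis-cli sends each command to the server in the RESP wire format: an array of bulk strings, each prefixed by its byte length. A minimal sketch of that encoding in plain Python (no Redis connection needed):

```python
def encode_command(*args: str) -> bytes:
    """Encode a command as a RESP array of bulk strings."""
    out = b"*%d\r\n" % len(args)          # array header: number of arguments
    for arg in args:
        data = arg.encode()
        out += b"$%d\r\n%s\r\n" % (len(data), data)  # bulk string: length, then bytes
    return out

print(encode_command("SET", "foo", "bar"))
# b'*3\r\n$3\r\nSET\r\n$3\r\nfoo\r\n$3\r\nbar\r\n'
```

This is exactly what crosses the wire for the `set foo bar` shown above; the server's `+OK\r\n` reply comes back in the same protocol.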

Redis Configuration Parameters

1. Redis does not run as a daemon by default; set this to yes to daemonize.
   daemonize no
2. When running as a daemon, Redis writes its pid to /var/run/redis.pid by default; change the location with pidfile.
   pidfile /var/run/redis.pid
3. Listening port, default 6379. The author explained in a blog post that 6379 spells MERZ on a phone keypad, after the Italian showgirl Alessia Merz.
   port 6379
4. Host address to bind.
   bind 127.0.0.1
5. Close a connection after the client has been idle for this many seconds; 0 disables the timeout.
   timeout 300
6. Log level, one of debug, verbose, notice, or warning; default verbose.
   loglevel verbose
7. Log destination; default is standard output. If Redis is daemonized while logfile is stdout, log output is sent to /dev/null.
   logfile stdout
8. Number of databases. Connections start on database 0 and can switch with SELECT <dbid>.
   databases 16
9. Snapshot to disk when at least <changes> updates occur within <seconds>; several conditions can be combined.
   save <seconds> <changes>
   The default configuration file ships with three conditions:
   save 900 1
   save 300 10
   save 60 10000
   i.e. at least 1 change within 900 seconds (15 minutes), 10 changes within 300 seconds (5 minutes), or 10000 changes within 60 seconds.
10. Compress the local dump file; default yes. Redis uses LZF compression; disabling it saves CPU time at the cost of a much larger file.
    rdbcompression yes
11. Local dump file name; default dump.rdb.
    dbfilename dump.rdb
12. Directory for the local dump file.
    dir ./
13. When this instance is a slave, the master's IP address and port; the slave syncs from the master automatically at startup.
    slaveof <masterip> <masterport>
14. Password a slave uses to connect to a password-protected master.
    masterauth <master-password>
15. Connection password; when set, clients must authenticate with AUTH <password>. Disabled by default.
    requirepass foobared
16. Maximum number of simultaneous client connections. By default there is no limit (bounded only by the maximum number of file descriptors the Redis process can open); maxclients 0 also means no limit. When the limit is reached, Redis refuses new connections with the error "max number of clients reached".
    maxclients 128
17. Maximum memory. Redis loads its data into memory at startup; when the limit is reached it first evicts expired or soon-to-expire keys, and if memory is still exhausted, writes fail while reads continue to work. With Redis's (old) VM mechanism, keys stay in memory while values can be swapped to disk.
    maxmemory <bytes>
18. Log every update operation (append-only file). By default Redis writes data to disk asynchronously according to the save conditions above, so some data exists only in memory for a while and can be lost on power failure. Default no.
    appendonly no
19. Append-only file name; default appendonly.aof.
    appendfilename appendonly.aof
20. Append-only fsync policy, one of three values:
    no: let the operating system flush the cache to disk (fast)
    always: call fsync() after every update (slow, safest)
    everysec: sync once per second (compromise, the default)
    appendfsync everysec
21. Enable the virtual-memory mechanism; default no. Briefly, VM stores data in pages and Redis swaps rarely accessed pages (cold data) out to disk, paging frequently accessed pages back into memory (analyzed in detail in a later article).
    vm-enabled no
22. Swap file path; default /tmp/redis.swap. It must not be shared between Redis instances.
    vm-swap-file /tmp/redis.swap
23. Data beyond vm-max-memory is stored in the swap file. However small vm-max-memory is set, all index data (Redis's index data is the keys) stays in memory; with vm-max-memory 0, all values live on disk. Default 0.
    vm-max-memory 0
24. The swap file is divided into many pages; an object may span multiple pages, but a page cannot be shared by multiple objects. Set vm-page-size according to your data: the author suggests 32 or 64 bytes for many small objects, larger pages for large objects, and the default if unsure.
    vm-page-size 32
25. Number of pages in the swap file. The page table (a bitmap marking pages free or used) is kept in memory, costing 1 byte of RAM per 8 pages on disk.
    vm-pages 134217728
26. Number of threads accessing the swap file; best kept at or below the number of CPU cores. With 0, all swap-file access is serialized, which can cause long delays. Default 4.
    vm-max-threads 4
27. Coalesce small replies into a single packet when answering clients; enabled by default.
    glueoutputbuf yes
28. Use a special compact hash encoding when the number of entries or the largest element stays below a threshold.
    hash-max-zipmap-entries 64
    hash-max-zipmap-value 512
29. Enable active rehashing; on by default (covered later with Redis's hash algorithm).
    activerehashing yes
30. Include other configuration files, so multiple Redis instances on one host can share a common file while each keeps its own instance-specific file.
    include /path/to/local.conf
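The save rules in item 9 can be read as "snapshot when any (seconds, changes) pair is satisfied since the last save". A small sketch of that check; the rule list mirrors the defaults above:

```python
# The default save rules from redis.conf: (seconds, changes)
SAVE_RULES = [(900, 1), (300, 10), (60, 10000)]

def should_snapshot(elapsed_seconds: int, changes: int, rules=SAVE_RULES) -> bool:
    """True when any rule is met: at least `changes` writes and `seconds` elapsed."""
    return any(elapsed_seconds >= s and changes >= c for s, c in rules)

print(should_snapshot(900, 1))     # True: 1 change within 15 minutes triggers a save
print(should_snapshot(59, 10000))  # False: many changes, but 60 seconds not yet elapsed
```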

Starting Redis at Boot on CentOS
1. Set daemonize to yes in redis.conf so the server runs as a daemon:
# By default Redis does not run as a daemon. Use 'yes' if you need it.
# Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
daemonize yes
2. Create an init script with vi /etc/init.d/redis, with the following content:
#!/bin/sh
# chkconfig: 2345 10 90
# description: Start and Stop redis

PATH=/usr/local/bin:/sbin:/usr/bin:/bin
REDISPORT=6379                              # adjust for your environment
EXEC=/usr/local/redis/src/redis-server      # adjust for your environment
REDIS_CLI=/usr/local/redis/src/redis-cli    # adjust for your environment
PIDFILE=/var/run/redis.pid
CONF="/usr/local/redis/redis.conf"          # adjust for your environment

case "$1" in
    start)
        if [ -f $PIDFILE ]
        then
            echo "$PIDFILE exists, process is already running or crashed."
        else
            echo "Starting Redis server..."
            $EXEC $CONF
        fi
        if [ "$?" = "0" ]
        then
            echo "Redis is running..."
        fi
        ;;
    stop)
        if [ ! -f $PIDFILE ]
        then
            echo "$PIDFILE does not exist, process is not running."
        else
            PID=$(cat $PIDFILE)
            echo "Stopping..."
            $REDIS_CLI -p $REDISPORT SHUTDOWN
            while [ -x /proc/${PID} ]
            do
                echo "Waiting for Redis to shutdown..."
                sleep 1
            done
            echo "Redis stopped"
        fi
        ;;
    restart|force-reload)
        ${0} stop
        ${0} start
        ;;
    *)
        echo "Usage: /etc/init.d/redis {start|stop|restart|force-reload}" >&2
        exit 1
esac
3. Save and exit, then make the script executable:
chmod +x /etc/init.d/redis
4. Test the script:
/etc/init.d/redis start
A successful start prints:
Starting Redis server...
Redis is running...
Test with redis-cli:
[root@hadoop0 ~]# /usr/redisbin/redis-cli
127.0.0.1:6379> set foo bar
OK
127.0.0.1:6379> get foo
"bar"
127.0.0.1:6379> exit
5. Enable the service at boot:
chkconfig redis on
6. Verify:
# start or stop redis through the service command
service redis start
service redis stop
# enable the service at boot
chkconfig redis on
Redis Master-Slave Setup
Master: 192.168.60.128  hadoop0
Slave:  192.168.60.135  hadoop1
Note: every slave node uses the same configuration.
Method 1: edit the configuration file manually
1. Set the slaveof property in the slave's configuration file:
slaveof hadoop0 6379
2. Start Redis on hadoop0 and run the info command to check the replication state:
# Replication
role:master
connected_slaves:1
slave0:ip=192.168.60.135,port=6379,state=online,offset=1188,lag=0
master_repl_offset:1188
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:2
repl_backlog_histlen:1187
3. Start Redis on hadoop1 and run info there as well:
# Replication
role:slave
master_host:hadoop0
master_port:6379
master_link_status:up
master_last_io_seconds_ago:8
master_sync_in_progress:0
slave_repl_offset:1356
slave_priority:100
slave_read_only:1
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0
Method 2: set it dynamically. Start Redis on both hadoop0 and hadoop1, then in the client run:
127.0.0.1:6379> slaveof hadoop0 6379
127.0.0.1:6379> info
The result is the same as with the configuration file, except the setting only holds for the current run; it is lost once redis-server restarts.
A master-slave structure helps cope with high concurrency by balancing load and relieving pressure on individual Redis nodes: the master handles reads and writes, while the slaves serve reads.
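The read/write split described above can be sketched as a tiny client-side router. This is an illustration, not a real client: the command set is a partial, hypothetical list, and the endpoints are just (host, port) tuples.

```python
import itertools

# Illustrative subset of Redis write commands (not exhaustive)
WRITE_COMMANDS = {"SET", "DEL", "EXPIRE", "INCR", "LPUSH", "SADD", "HSET"}

class ReadWriteRouter:
    """Route writes to the master and spread reads over the slaves round-robin."""

    def __init__(self, master, slaves):
        self.master = master
        self._slaves = itertools.cycle(slaves)  # endless round-robin over slaves

    def route(self, command):
        if command.upper() in WRITE_COMMANDS:
            return self.master
        return next(self._slaves)

router = ReadWriteRouter(("hadoop0", 6379), [("hadoop1", 6379)])
print(router.route("SET"))  # ('hadoop0', 6379) -- write goes to the master
print(router.route("GET"))  # ('hadoop1', 6379) -- read goes to a slave
```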
Deploying a Redis 3.0 Cluster (CentOS 6.5)
A cluster needs at least 3 master nodes to function. Here we create 6 Redis nodes, three masters and three slaves, with the following IP/port layout (ip 192.168.60.128, hostname hadoop0):
192.168.60.128:7000
192.168.60.128:7001
192.168.60.128:7002
192.168.60.128:7003
192.168.60.128:7004
192.168.60.128:7005
1. Download Redis from the official site, version 3.0.0 or later; the 2.x releases do not support cluster mode.
2. Upload it to the server, extract, and compile:
tar -zxvf redis-3.0.0.tar.gz
mv redis-3.0.0 redis
cd /usr/softinstall/redis
make
make install
3. Create the directories the cluster needs:
mkdir -p /usr/softinstall/redis/cluster
cd /usr/softinstall/redis/cluster
mkdir 7000
mkdir 7001
mkdir 7002
mkdir 7003
mkdir 7004
mkdir 7005
4. Edit the configuration file redis.conf:
cp /usr/softinstall/redis/redis.conf /usr/softinstall/redis/cluster
vi redis.conf
## change the following options
port 7000
daemonize yes
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
appendonly yes
## after editing these options, copy the file into each of the 7000-7005 directories
cp /usr/softinstall/redis/cluster/redis.conf /usr/softinstall/redis/cluster/7000
cp /usr/softinstall/redis/cluster/redis.conf /usr/softinstall/redis/cluster/7001
cp /usr/softinstall/redis/cluster/redis.conf /usr/softinstall/redis/cluster/7002
cp /usr/softinstall/redis/cluster/redis.conf /usr/softinstall/redis/cluster/7003
cp /usr/softinstall/redis/cluster/redis.conf /usr/softinstall/redis/cluster/7004
cp /usr/softinstall/redis/cluster/redis.conf /usr/softinstall/redis/cluster/7005
## Note: after copying, change the port parameter in redis.conf under 7001-7005 to match the name of each directory
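Hand-editing the port in six copies of redis.conf is error-prone, so the copy-and-patch step can be scripted. A sketch that writes each per-node config into a scratch directory (the temp directory is a stand-in for /usr/softinstall/redis/cluster, and the template repeats the options above):

```python
import pathlib
import re
import tempfile

# Scratch directory standing in for /usr/softinstall/redis/cluster
base = pathlib.Path(tempfile.mkdtemp())

TEMPLATE = """\
port 7000
daemonize yes
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
appendonly yes
"""

for port in range(7000, 7006):
    node_dir = base / str(port)
    node_dir.mkdir()
    # Patch only the port line; everything else is shared by all nodes
    conf = re.sub(r"(?m)^port \d+$", f"port {port}", TEMPLATE)
    (node_dir / "redis.conf").write_text(conf)

print((base / "7004" / "redis.conf").read_text().splitlines()[0])  # port 7004
```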
5. Copy redis-server from the src directory into the 7000-7005 directories:
cp /usr/softinstall/redis/src/redis-server /usr/softinstall/redis/cluster/7000
cp /usr/softinstall/redis/src/redis-server /usr/softinstall/redis/cluster/7001
cp /usr/softinstall/redis/src/redis-server /usr/softinstall/redis/cluster/7002
cp /usr/softinstall/redis/src/redis-server /usr/softinstall/redis/cluster/7003
cp /usr/softinstall/redis/src/redis-server /usr/softinstall/redis/cluster/7004
cp /usr/softinstall/redis/src/redis-server /usr/softinstall/redis/cluster/7005
Copy redis-trib.rb into /usr/softinstall/redis/cluster/:
cp /usr/softinstall/redis/src/redis-trib.rb /usr/softinstall/redis/cluster/
6. Start the six Redis instances:
cd /usr/softinstall/redis/cluster/7000
redis-server redis.conf
cd /usr/softinstall/redis/cluster/7001
redis-server redis.conf
cd /usr/softinstall/redis/cluster/7002
redis-server redis.conf
cd /usr/softinstall/redis/cluster/7003
redis-server redis.conf
cd /usr/softinstall/redis/cluster/7004
redis-server redis.conf
cd /usr/softinstall/redis/cluster/7005
redis-server redis.conf
7. Confirm the instances are running:
[root@hadoop0 cluster.lxk]# ps -ef | grep redis
root 4135    1 0 20:52 ?     00:00:00 redis-server *:7000 [cluster]
root 4140    1 0 20:53 ?     00:00:00 redis-server *:7001 [cluster]
root 4145    1 0 20:53 ?     00:00:00 redis-server *:7002 [cluster]
root 4151    1 0 20:53 ?     00:00:00 redis-server *:7003 [cluster]
root 4158    1 0 20:54 ?     00:00:00 redis-server *:7004 [cluster]
root 4164    1 0 20:54 ?     00:00:00 redis-server *:7005 [cluster]
root 4171 3437 0 20:54 pts/0 00:00:00 grep redis
8. Create the cluster with redis-trib:
cd /usr/softinstall/redis/cluster
[root@hadoop0 cluster.lxk]# ./redis-trib.rb create --replicas 1 hadoop0:7000 hadoop0:7001 hadoop0:7002 hadoop0:7003 hadoop0:7004 hadoop0:7005
/usr/bin/env: ruby: No such file or directory
8.1 The command above may fail because redis-trib.rb is a Ruby script and needs a Ruby runtime. Error: /usr/bin/env: ruby: No such file or directory. Install Ruby with yum:
yum install ruby
8.2 Run the cluster-create command again; it may now fail because the rubygems component is missing. Install it with yum. Error:
./redis-trib.rb:24:in `require': no such file to load -- rubygems (LoadError)
from ./redis-trib.rb:24
yum install rubygems
8.3 Run the command again; it may still fail to load redis, because the Ruby interface to Redis (the redis gem) is missing. Install it with gem. Error:
/usr/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require': no such file to load -- redis (LoadError)
from /usr/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `require'
from ./redis-trib.rb:25
gem install redis
8.4 Run the command again; when it executes normally, answer yes at the prompt.
8.5 If you instead see this error:
[root@hadoop0 cluster.lxk]# ./redis-trib.rb create --replicas 1 hadoop0:7000 hadoop0:7001 hadoop0:7002 hadoop0:7003 hadoop0:7004 hadoop0:7005
>>> Creating cluster
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
hadoop0:7000
hadoop0:7001
hadoop0:7002
Adding replica hadoop0:7003 to hadoop0:7000
Adding replica hadoop0:7004 to hadoop0:7001
Adding replica hadoop0:7005 to hadoop0:7002
M: 176b543f8a72949cde08061a10dfab993c0f8268 hadoop0:7000
   slots:0-5460 (5461 slots) master
M: b41a2a2275bc96bd2624c5461f8546a0fc29bed5 hadoop0:7001
   slots:5461-10922 (5462 slots) master
M: 36301c1eaf5dddbb6aba8b763a6d7ce3931a9373 hadoop0:7002
   slots:10923-16383 (5461 slots) master
S: 28f360f009aad115b4ef4f947161726b16c20642 hadoop0:7003
   replicates 176b543f8a72949cde08061a10dfab993c0f8268
S: 24fdb33794927e6464a7e5947a3a1e4778cea1fb hadoop0:7004
   replicates b41a2a2275bc96bd2624c5461f8546a0fc29bed5
S: 5ae5b6ec6ea072a3efd6d42dfdc0350615a3d745 hadoop0:7005
   replicates 36301c1eaf5dddbb6aba8b763a6d7ce3931a9373
Can I set the above configuration? (type 'yes' to accept): yes
/usr/lib/ruby/gems/1.8/gems/redis-3.2.2/lib/redis/client.rb:114:in `call': ERR Slot 4648 is already busy (Redis::CommandError)
from /usr/lib/ruby/gems/1.8/gems/redis-3.2.2/lib/redis.rb:2646:in `method_missing'
from /usr/lib/ruby/gems/1.8/gems/redis-3.2.2/lib/redis.rb:57:in `synchronize'
from /usr/lib/ruby/1.8/monitor.rb:242:in `mon_synchronize'
from /usr/lib/ruby/gems/1.8/gems/redis-3.2.2/lib/redis.rb:57:in `synchronize'
from /usr/lib/ruby/gems/1.8/gems/redis-3.2.2/lib/redis.rb:2645:in `method_missing'
from ./redis-trib.rb:212:in `flush_node_config'
from ./redis-trib.rb:711:in `flush_nodes_config'
from ./redis-trib.rb:710:in `each'
from ./redis-trib.rb:710:in `flush_nodes_config'
from ./redis-trib.rb:1209:in `create_cluster_cmd'
from ./redis-trib.rb:1609:in `send'
from ./redis-trib.rb:1609
On inspection, this is caused by stale configuration left behind by a previous failed cluster attempt. Delete the file named by cluster-config-file in redis.conf for each node, restart the redis-server instances, and run redis-trib again.
8.6 Running the cluster command below can fail with another error:
./redis-trib.rb create --replicas 1 hadoop0:7000 hadoop0:7001 hadoop0:7002 hadoop0:7003 hadoop0:7004 hadoop0:7005
>>> Sending CLUSTER MEET messages to join the cluster
/usr/lib/ruby/gems/1.8/gems/redis-3.2.2/lib/redis/client.rb:114:in `call': ERR Invalid node address specified: hadoop0:7000 (Redis::CommandError)
from /usr/lib/ruby/gems/1.8/gems/redis-3.2.2/lib/redis.rb:2646:in `method_missing'
from /usr/lib/ruby/gems/1.8/gems/redis-3.2.2/lib/redis.rb:57:in `synchronize'
from /usr/lib/ruby/1.8/monitor.rb:242:in `mon_synchronize'
from /usr/lib/ruby/gems/1.8/gems/redis-3.2.2/lib/redis.rb:57:in `synchronize'
from /usr/lib/ruby/gems/1.8/gems/redis-3.2.2/lib/redis.rb:2645:in `method_missing'
from ./redis-trib.rb:746:in `join_cluster'
from ./redis-trib.rb:744:in `each'
from ./redis-trib.rb:744:in `join_cluster'
from ./redis-trib.rb:1214:in `create_cluster_cmd'
from ./redis-trib.rb:1609:in `send'
from ./redis-trib.rb:1609
This is a limitation of Redis itself: at this point it does not resolve hostnames for cluster nodes. Replacing hadoop0 with its IP address made the command succeed:
./redis-trib.rb create --replicas 1 192.168.60.128:7000 192.168.60.128:7001 192.168.60.128:7002 192.168.60.128:7003 192.168.60.128:7004 192.168.60.128:7005
[root@hadoop0 cluster.lxk]# ./redis-trib.rb create --replicas 1 192.168.60.128:7000 192.168.60.128:7001 192.168.60.128:7002 192.168.60.128:7003 192.168.60.128:7004 192.168.60.128:7005
>>> Creating cluster
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
192.168.60.128:7000
192.168.60.128:7001
192.168.60.128:7002
Adding replica 192.168.60.128:7003 to 192.168.60.128:7000
Adding replica 192.168.60.128:7004 to 192.168.60.128:7001
Adding replica 192.168.60.128:7005 to 192.168.60.128:7002
M: f846eefc456c4d58a2e16e0c22cc3efaaeebe2b8 192.168.60.128:7000
   slots:0-5460 (5461 slots) master
M: 63af0a05b3a4620888fe3144c9a67044ea4b67b5 192.168.60.128:7001
   slots:5461-10922 (5462 slots) master
M: 19bb8e6aae6f437ca4b2eb08dade49f81ce7d66a 192.168.60.128:7002
   slots:10923-16383 (5461 slots) master
S: 4fca799bbfcfc75316a7d42e6110fe33dce74c59 192.168.60.128:7003
   replicates f846eefc456c4d58a2e16e0c22cc3efaaeebe2b8
S: d1e239f90808dcd2e7a6ebb49814d9af83c9d3f3 192.168.60.128:7004
   replicates 63af0a05b3a4620888fe3144c9a67044ea4b67b5
S: 5ac0ea2bd96db0c70c290bb5da001ee3526cf90a 192.168.60.128:7005
   replicates 19bb8e6aae6f437ca4b2eb08dade49f81ce7d66a
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join.....
>>> Performing Cluster Check (using node 192.168.60.128:7000)
M: f846eefc456c4d58a2e16e0c22cc3efaaeebe2b8 192.168.60.128:7000
   slots:0-5460 (5461 slots) master
M: 63af0a05b3a4620888fe3144c9a67044ea4b67b5 192.168.60.128:7001
   slots:5461-10922 (5462 slots) master
M: 19bb8e6aae6f437ca4b2eb08dade49f81ce7d66a 192.168.60.128:7002
   slots:10923-16383 (5461 slots) master
M: 4fca799bbfcfc75316a7d42e6110fe33dce74c59 192.168.60.128:7003
   slots: (0 slots) master
   replicates f846eefc456c4d58a2e16e0c22cc3efaaeebe2b8
M: d1e239f90808dcd2e7a6ebb49814d9af83c9d3f3 192.168.60.128:7004
   slots: (0 slots) master
   replicates 63af0a05b3a4620888fe3144c9a67044ea4b67b5
M: 5ac0ea2bd96db0c70c290bb5da001ee3526cf90a 192.168.60.128:7005
   slots: (0 slots) master
   replicates 19bb8e6aae6f437ca4b2eb08dade49f81ce7d66a
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
The cluster is up.
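With the cluster running, each key maps to one of the 16384 hash slots via CRC16(key) mod 16384; the transcript above shows each master owning a contiguous slot range. A sketch of the CRC16-CCITT (XModem) variant Redis Cluster uses (the real algorithm also honors {hash tags}, which this sketch omits):

```python
def crc16(data: bytes) -> int:
    """CRC16-CCITT (XModem): polynomial 0x1021, init 0, no reflection."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def keyslot(key: str) -> int:
    """Hash slot a key maps to in Redis Cluster."""
    return crc16(key.encode()) % 16384

print(hex(crc16(b"123456789")))  # 0x31c3, the check value given in the cluster spec
print(keyslot("foo"))            # 12182
```

Slot 12182 for key foo matches the redirection shown in the official cluster tutorial (`set foo bar` → Redirected to slot [12182]).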
Adding a Node to a Redis 3.0 Cluster
1. Start the node to be added:
Building on the cluster layout above:
cd /usr/softinstall/redis/cluster
cp -r 7000 7006
In the 7006 directory, keep redis.conf and redis-server and delete the other files, change the port in redis.conf to 7006 (matching the directory name), then start the node:
redis-server redis.conf
2. Add the new node to the cluster:
redis-trib.rb add-node 192.168.60.128:7006 192.168.60.128:7001
3. Run redis-cli -c -p 7001 cluster nodes to see the new node:
[root@hadoop0 cluster.lxk]# redis-cli -c -p 7001 cluster nodes
4fca799bbfcfc75316a7d42e6110fe33dce74c59 192.168.60.128:7003 slave f846eefc456c4d58a2e16e0c22cc3efaaeebe2b8 0 1451032623155 4 connected
5ac0ea2bd96db0c70c290bb5da001ee3526cf90a 192.168.60.128:7005 slave 19bb8e6aae6f437ca4b2eb08dade49f81ce7d66a 0 1451032624175 6 connected
d1e239f90808dcd2e7a6ebb49814d9af83c9d3f3 192.168.60.128:7004 slave 63af0a05b3a4620888fe3144c9a67044ea4b67b5 0 1451032621110 5 connected
4037989e0baafd19604c5654ca76cb8618790232 192.168.60.128:7006 master - 0 1451032622437 7 connected 0-332 5461-5794 10923-11255
f846eefc456c4d58a2e16e0c22cc3efaaeebe2b8 192.168.60.128:7000 master - 0 1451032620086 1 connected 333-5460
b667fda0212483551a1b1de995ed95927eb6761a 192.168.60.128:7007 master - 0 1451032622133 0 connected
19bb8e6aae6f437ca4b2eb08dade49f81ce7d66a 192.168.60.128:7002 master - 0 1451032619678 3 connected 11256-16383
63af0a05b3a4620888fe3144c9a67044ea4b67b5 192.168.60.128:7001 myself,master - 0 0 2 connected 5795-10922
4. The new node can become either a master or a slave.
4.1 To make it a master, use the redis-trib program to move some of the cluster's hash slots onto the new node; once it owns slots it is a real master. Run the reshard command:
./redis-trib.rb reshard 192.168.60.128:7000
redis-trib asks how many hash slots to move; here we move 1000.
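The 5461/5462/5461 split seen in the redis-trib output comes from dividing 16384 slots as evenly as possible across the masters. A sketch of the arithmetic, rounding a running cursor the way redis-trib's allocation does:

```python
def split_slots(n_masters: int, total: int = 16384):
    """Divide `total` hash slots into contiguous, near-equal ranges."""
    per = total / n_masters
    ranges, first = [], 0
    for i in range(n_masters):
        last = round((i + 1) * per) - 1  # round the running cursor to place each boundary
        ranges.append((first, last))
        first = last + 1
    return ranges

print(split_slots(3))  # [(0, 5460), (5461, 10922), (10923, 16383)]
```

For 3 masters this reproduces the ranges in the transcripts above: 5461, 5462, and 5461 slots.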