ZooKeeper + SolrCloud Cluster Deployment

1. Installation and Deployment
ZooKeeper cluster deployment
Nodes:
10.1.12.51:2181  node1
10.1.12.52:2181  node2
10.1.12.53:2181  node3
Latest stable release download URL (currently 3.4.6):
http://mirrors.cnnic.cn/apache/zookeeper/stable/zookeeper-3.4.6.tar.gz
On each node, extract ZooKeeper to /usr/local:
tar xf /usr/local/src/zookeeper-3.4.6.tar.gz -C /usr/local
cd /usr/local
ln -s zookeeper-3.4.6 zookeeper
Create the following data directory structure:
/data/zookeeper/
├── data
└── log
mkdir -p /data/zookeeper/{data,log}

Configure ZooKeeper on each node:
cd /usr/local/zookeeper/conf
cp zoo_sample.cfg zoo.cfg
vim zoo.cfg
Modify as follows:
tickTime=2000        # base time unit in milliseconds
initLimit=10         # ticks a follower may take to connect and sync with the leader
syncLimit=5          # ticks a follower may lag behind the leader before being dropped
dataDir=/data/zookeeper/data
clientPort=2181
server.1=10.1.12.51:2888:3888    # server.<myid>=<host>:<peer port>:<leader election port>
server.2=10.1.12.52:2888:3888
server.3=10.1.12.53:2888:3888
Write a unique node id into each node's data directory:
node1:
echo 1 > /data/zookeeper/data/myid
node2:
echo 2 > /data/zookeeper/data/myid
node3:
echo 3 > /data/zookeeper/data/myid
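A minimal sketch for writing all three ids in one pass, assuming passwordless root SSH to the nodes (an assumption; the original simply runs the echo on each node):
id=1
for host in 10.1.12.51 10.1.12.52 10.1.12.53; do
    # myid must match the server.<N> entry for this host in zoo.cfg
    ssh root@"$host" "echo $id > /data/zookeeper/data/myid"
    id=$((id + 1))
done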
Set the ZOO_LOG_DIR environment variable on each node:
echo -e "\nZOO_LOG_DIR=/data/zookeeper/log" >> /usr/local/zookeeper/bin/zkEnv.sh
Start ZooKeeper on each node:
/usr/local/zookeeper/bin/zkServer.sh start
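To verify the ensemble came up, assuming nc (netcat) is installed: a healthy server answers "imok" to the ruok four-letter command, and zkServer.sh status should report one leader and two followers across the three nodes.
for host in 10.1.12.51 10.1.12.52 10.1.12.53; do
    printf '%s: ' "$host"
    echo ruok | nc "$host" 2181    # expect "imok"
    echo
done
/usr/local/zookeeper/bin/zkServer.sh status    # run on each node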
Add a SysV service script for ZooKeeper:
vim /etc/init.d/zookeeper
#!/bin/bash
#
# chkconfig: 345 30 70
# description: Starts/Stops Apache Zookeeper

export ZOO_HOME=/usr/local/zookeeper
export ZOO_BIN=$ZOO_HOME/bin
export ZOO_SER_BIN=$ZOO_BIN/zkServer.sh

$ZOO_SER_BIN "$1"    # pass start|stop|restart|status straight through to zkServer.sh
#------finish-------
chmod +x /etc/init.d/zookeeper
chkconfig --add zookeeper
chkconfig zookeeper on
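With the script registered, ZooKeeper can be managed like any other SysV service; the argument is handed straight to zkServer.sh:
/etc/init.d/zookeeper status
/etc/init.d/zookeeper restart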

SolrCloud deployment (3 shards, 2 replicas per shard)
Nodes:
10.1.12.51:8983  shard1-repl1, shard2-repl2
10.1.12.52:8983  shard2-repl1, shard3-repl2
10.1.12.53:8983  shard3-repl1, shard1-repl2

Latest stable release download URL (currently 5.3.1):
http://mirrors.cnnic.cn/apache/lucene/solr/5.3.1/solr-5.3.1.tgz
Extract and install on each node:
cd /usr/local/src
tar xzf solr-5.3.1.tgz solr-5.3.1/bin/install_solr_service.sh --strip-components=2
./install_solr_service.sh solr-5.3.1.tgz -i /usr/local -d /data/solrcloud -u solr -s solr -p 8983
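For reference, the installer flags used above: -i is the installation directory, -d the data/log directory, -u the OS user to create and run as, -s the service name, and -p the listening port. The installer starts the service once it finishes, which can be confirmed with:
service solr status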
Modify the Solr service configuration on each node:
1. Adjust the JVM heap size
echo 'SOLR_JAVA_MEM="-Xms10g -Xmx10g"' >> /data/solrcloud/solr.in.sh
2. Point Solr at the ZooKeeper ensemble
echo 'ZK_HOST=10.1.12.51:2181,10.1.12.52:2181,10.1.12.53:2181' >> /data/solrcloud/solr.in.sh
3. Add the jars Solr depends on. The IK analyzer is used here for Chinese word segmentation (it supports Solr 5.x); download it from:
https://github.com/EugenePig/ik-analyzer-solr5
cp ~/ik-analyzer-solr5-5.x.jar /usr/local/solr/server/solr-webapp/webapp/WEB-INF/lib
cp -n /usr/local/solr/dist/*.jar /usr/local/solr/server/solr-webapp/webapp/WEB-INF/lib
cp ~/mysql-connector-java-5.1.35.jar /usr/local/solr/server/lib
4. Restart the Solr service
/etc/init.d/solr restart

Create the collections (running this on any one node is enough):
su -c '/usr/local/solr/bin/solr create -c core_bingdu -d /opt/core_bingdu_conf -n core_bingdu -s 3 -rf 2 -port 8983' - solr
su -c '/usr/local/solr/bin/solr create -c core_bingdu_user -d /opt/core_bingdu_user_conf -n core_bingdu_user -s 3 -rf 2 -port 8983' - solr
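To confirm both collections came up, the Collections API LIST action (covered in section 3 below) can be queried from any node:
curl -s "http://10.1.12.51:8983/solr/admin/collections?action=LIST&wt=json"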

2. Cluster Operations
Uploading a modified Solr collection configuration to ZooKeeper:
Local copies of the Solr collection configurations are currently kept under the /opt directory:
[root@mongo-shard1-1 opt]# tree .
.
├── core_bingdu_conf
│   ├── admin-extra.html
│   ├── admin-extra.menu-bottom.html
│   ├── admin-extra.menu-top.html
│   ├── data-config.xml
│   ├── dataimport.properties
│   ├── _rest_managed.json
│   ├── schema.xml
│   └── solrconfig.xml
└── core_bingdu_user_conf
    ├── admin-extra.html
    ├── admin-extra.menu-bottom.html
    ├── admin-extra.menu-top.html
    ├── data-config.xml
    ├── dataimport.properties
    ├── _rest_managed.json
    ├── schema.xml
    └── solrconfig.xml
After modifying any of these configuration files, upload them to ZooKeeper manually:
1) Upload an entire configuration directory
cd /usr/local/solr/server/scripts/cloud-scripts
./zkcli.sh -zkhost 10.1.12.51:2181 -cmd upconfig -confdir /opt/core_bingdu_conf -confname core_bingdu
./zkcli.sh -zkhost 10.1.12.51:2181 -cmd upconfig -confdir /opt/core_bingdu_user_conf -confname core_bingdu_user
2) Upload a single file
./zkcli.sh -zkhost 10.1.12.51:2181 -cmd putfile /configs/core_bingdu/solrconfig.xml /opt/core_bingdu_conf/solrconfig.xml
./zkcli.sh -zkhost 10.1.12.51:2181 -cmd putfile /configs/core_bingdu_user/solrconfig.xml /opt/core_bingdu_user_conf/solrconfig.xml
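Uploading alone does not make running collections pick up the new configuration; a RELOAD (see the Collections API in section 3) is needed afterwards. A minimal sketch, assuming any node is reachable on port 8983:
curl -s "http://10.1.12.51:8983/solr/admin/collections?action=RELOAD&name=core_bingdu"
curl -s "http://10.1.12.51:8983/solr/admin/collections?action=RELOAD&name=core_bingdu_user"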


3. Common Solr Operations
Using the command line:
<solr_install_dir>/bin/solr command [options...]
Supported commands:
start, stop, restart, status, healthcheck, create, create_core, create_collection, delete
The start, stop, restart, and status operations are already wrapped by the SysV-style service script; for example, to restart the Solr service:

/etc/init.d/solr restart
Check the status of each node in the Solr cluster:

[root@mongo-shard1-1 bin]# ./solr healthcheck -c core_bingdu -z 10.1.12.52:2181
{
  "collection":"core_bingdu",
  "status":"healthy",
  "numDocs":180900,
  "numShards":3,
  "shards":[
    {
      "shard":"shard1",
      "status":"healthy",
      "replicas":[
        {
          "name":"core_node2",
          "url":"http://10.1.12.51:8983/solr/core_bingdu_shard1_replica2/",
          "numDocs":60424,
          "status":"active",
          "uptime":"2 days, 17 hours, 13 minutes, 9 seconds",
          "memory":"3.4 GB (%36) of 9.6 GB",
          "leader":true},
        {
          "name":"core_node5",
          "url":"http://10.1.12.53:8983/solr/core_bingdu_shard1_replica1/",
          "numDocs":60424,
          "status":"active",
          "uptime":"2 days, 16 hours, 58 minutes, 39 seconds",
          "memory":"2.2 GB (%22.5) of 9.6 GB"}]},
    {
      "shard":"shard2",
      "status":"healthy",
      "replicas":[
        {
          "name":"core_node3",
          "url":"http://10.1.12.52:8983/solr/core_bingdu_shard2_replica1/",
          "numDocs":59916,
          "status":"active",
          "uptime":"2 days, 17 hours, 14 minutes, 3 seconds",
          "memory":"3 GB (%31.1) of 9.6 GB",
          "leader":true},
        {
          "name":"core_node6",
          "url":"http://10.1.12.53:8983/solr/core_bingdu_shard2_replica2/",
          "numDocs":59916,
          "status":"active",
          "uptime":"2 days, 16 hours, 58 minutes, 39 seconds",
          "memory":"2.2 GB (%22.5) of 9.6 GB"}]},
    {
      "shard":"shard3",
      "status":"healthy",
      "replicas":[
        {
          "name":"core_node1",
          "url":"http://10.1.12.51:8983/solr/core_bingdu_shard3_replica1/",
          "numDocs":60560,
          "status":"active",
          "uptime":"2 days, 17 hours, 13 minutes, 9 seconds",
          "memory":"3.5 GB (%36) of 9.6 GB"},
        {
          "name":"core_node4",
          "url":"http://10.1.12.52:8983/solr/core_bingdu_shard3_replica2/",
          "numDocs":60560,
          "status":"active",
          "uptime":"2 days, 17 hours, 14 minutes, 3 seconds",
          "memory":"3 GB (%31.2) of 9.6 GB",
          "leader":true}]}]}


Create a collection:
bin/solr create -c core_bingdu -d /opt/core_bingdu_conf -n core_bingdu -s 3 -rf 2 -port 8983
A note on the options:
-c: the collection or core name (in standalone Solr, collection and core mean roughly the same thing; in cloud mode, a collection is the set of per-shard cores that back it)
-d: a local Solr configuration directory (a staging directory holding config files such as solrconfig.xml and schema.xml); every time its contents change, they must be re-uploaded to ZooKeeper under the /configs node
-n: the name of the child node created under ZooKeeper's /configs node, here /configs/core_bingdu (a verification sketch follows this list)
-s: the number of shards for the collection; Solr distributes the shards across nodes according to their load
-rf: the number of replicas to create for each shard
-port: the Solr service port; defaults to 8983 if not specified
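As the verification sketch for the -n note: downconfig, the inverse of upconfig in the same zkcli.sh, pulls the active configuration back out of ZooKeeper so it can be compared against the local copy. The /tmp target directory here is hypothetical:
/usr/local/solr/server/scripts/cloud-scripts/zkcli.sh \
    -zkhost 10.1.12.51:2181 -cmd downconfig \
    -confdir /tmp/core_bingdu_check -confname core_bingdu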

Delete a collection:

bin/solr delete -c core_bingdu

Operations via the Solr Collections API:
Create a collection:
http://localhost:8983/solr/admin/collections?action=CREATE&name=mycollection&numShards=3&replicationFactor=4
Delete a collection:
http://localhost:8983/solr/admin/collections?action=DELETE&name=mycollection
Reload a collection:
http://localhost:8983/solr/admin/collections?action=RELOAD&name=mycollection
List all collections:
http://10.1.12.53:8983/solr/admin/collections?action=LIST
Split one shard into two (for scaling the cluster out):
http://localhost:8983/solr/admin/collections?action=SPLITSHARD&collection=anotherCollection&shard=shard1
Create an alias for a set of collections:
http://localhost:8983/solr/admin/collections?action=CREATEALIAS&name=testalias&collections=anotherCollection,testCollection
Delete a collection alias:
http://localhost:8983/solr/admin/collections?action=DELETEALIAS&name=testalias
Delete a replica:
http://localhost:8983/solr/admin/collections?action=DELETEREPLICA&collection=test2&shard=shard2&replica=core_node3
Add a replica:
http://localhost:8983/solr/admin/collections?action=ADDREPLICA&collection=test2&shard=shard2&node=192.167.1.2:8983_solr
Check the overseer status (the overseer is the single node in a Solr cluster responsible for processing these actions; it tracks the state of the other shards and nodes and assigns shards to nodes):
http://localhost:8983/solr/admin/collections?action=OVERSEERSTATUS&wt=json
Check the cluster status:
http://localhost:8983/solr/admin/collections?action=CLUSTERSTATUS&wt=json
Other Collections API calls are documented in detail at: https://cwiki.apache.org/confluence/display/solr/Collections+API
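A small monitoring sketch combining the status call above with pretty-printing, assuming a python interpreter is available on the host:
curl -s "http://10.1.12.53:8983/solr/admin/collections?action=CLUSTERSTATUS&wt=json" \
    | python -m json.tool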
Build a full index:
/usr/bin/curl -G "http://10.1.12.53:8983/solr/core_bingdu/dataimport?command=full-import&clean=true&commit=true" >/dev/null 2>&1
Build a delta (incremental) index:
/usr/bin/curl -G "http://10.1.12.53:8983/solr/core_bingdu_user/dataimport?command=delta-import&clean=false&commit=true" >/dev/null 2>&1
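The /dev/null redirects suggest these imports run unattended; a hedged cron sketch (the schedule below is an assumption, not from the original): rebuild the full index nightly at 03:00 and pull deltas every 5 minutes.
# crontab entries (assumed schedule)
0 3 * * * /usr/bin/curl -G "http://10.1.12.53:8983/solr/core_bingdu/dataimport?command=full-import&clean=true&commit=true" >/dev/null 2>&1
*/5 * * * * /usr/bin/curl -G "http://10.1.12.53:8983/solr/core_bingdu_user/dataimport?command=delta-import&clean=false&commit=true" >/dev/null 2>&1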


For more, see the official reference guide: http://mirrors.cnnic.cn/apache/lucene/solr/ref-guide/apache-solr-ref-guide-5.3.pdf