HBase table data migration
2016-04-12 18:23
1 The CopyTable tool
Usage:
CopyTable is a utility that can copy part or all of a table, either to the same cluster or to another cluster. The target table must first exist. The usage is as follows:
$ bin/hbase org.apache.hadoop.hbase.mapreduce.CopyTable [--starttime=X] [--endtime=Y] [--new.name=NEW] [--peer.adr=ADR] tablename
Options:
starttime   Beginning of the time range. Without endtime, it means from starttime to forever.
endtime     End of the time range.
versions    Number of cell versions to copy.
new.name    New table's name.
peer.adr    Address of the peer cluster, given in the format hbase.zookeeper.quorum:hbase.zookeeper.client.port:zookeeper.znode.parent
families    Comma-separated list of ColumnFamilies to copy.
all.cells   Also copy delete markers and uncollected deleted cells (advanced option).
Args:
tablename Name of table to copy.
Example of copying 'TestTable' to a cluster that uses replication for a 1 hour window:
$ bin/hbase org.apache.hadoop.hbase.mapreduce.CopyTable --starttime=1265875194289 --endtime=1265878794289 --peer.adr=server1,server2,server3:2181:/hbase TestTable
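The --starttime/--endtime values above are epoch timestamps in milliseconds. A minimal sketch of computing a 1-hour window on a GNU/Linux shell (the variable names here are illustrative, not CopyTable parameters):

```shell
# Compute a 1-hour window in epoch milliseconds for --starttime/--endtime.
# Note: %3N (millisecond precision) requires GNU date.
endtime=$(date +%s%3N)            # current time, in milliseconds
starttime=$((endtime - 3600000))  # one hour (3,600,000 ms) earlier
echo "--starttime=$starttime --endtime=$endtime"
```

The resulting pair can be pasted into the CopyTable command above.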
Scanner Caching
Caching for the input Scan is configured via hbase.client.scanner.caching in the job configuration.
Versions
By default, the CopyTable utility only copies the latest version of row cells unless --versions=n is explicitly specified in the command.
See Jonathan Hsieh's Online HBase Backups with CopyTable blog post for more on CopyTable.
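CopyTable can also duplicate a table within a single cluster by giving the copy a new name; a hedged sketch using only the options listed above (the table names are examples, and the target table must already exist):

```shell
# Copy 'TestTable' into 'TestTableCopy' on the same cluster (no --peer.adr).
$ bin/hbase org.apache.hadoop.hbase.mapreduce.CopyTable --new.name=TestTableCopy TestTable
```

This runs as a MapReduce job, so it requires a live HBase cluster and will not work against a standalone shell.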
2 The Export and Import tools
Export is a utility that will dump the contents of a table to HDFS in a sequence file. Invoke via:
$ bin/hbase org.apache.hadoop.hbase.mapreduce.Export <tablename> <outputdir> [<versions> [<starttime> [<endtime>]]]
Note: caching for the input Scan is configured via hbase.client.scanner.caching in the job configuration.
Import is a utility that will load data that has been exported back into HBase. Invoke via:
$ bin/hbase org.apache.hadoop.hbase.mapreduce.Import <tablename> <inputdir>
To import files exported from 0.94 into a 0.96 or later cluster, you need to set the system property "hbase.import.version" when running the import command, as below:
$ bin/hbase -Dhbase.import.version=0.94 org.apache.hadoop.hbase.mapreduce.Import <tablename> <inputdir>
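Putting Export and Import together, a sketch of a full round trip (the paths and table names are hypothetical, and the target table must exist before the Import step):

```shell
# Dump 'mytable' to HDFS, then load the dump into 'mytable_restore'.
$ bin/hbase org.apache.hadoop.hbase.mapreduce.Export mytable hdfs://namenode:9000/backup/mytable
$ bin/hbase org.apache.hadoop.hbase.mapreduce.Import mytable_restore hdfs://namenode:9000/backup/mytable
```

Both commands launch MapReduce jobs, so they require a running cluster.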
Example of using Export with a time range:
hbase org.apache.hadoop.hbase.mapreduce.Export member5 hdfs://master24:9000/user/hadoop/dump2 1 1401938590466 1401938590467
The output directory is an HDFS path; write it out in full.
The table being imported into must already exist, i.e. be defined in advance.
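Since Import does not create the target table, it can be defined up front in the HBase shell. A sketch, assuming a restore table named 'member5_restore' with a single column family 'cf' (both names are illustrative and must match the exported table's families):

```shell
$ echo "create 'member5_restore', 'cf'" | bin/hbase shell
```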