
Common values and descriptions in the dfs configuration file

2016-07-19 11:18
Copied from the web:

name | value | description
---- | ----- | -----------
dfs.namenode.logging.level | info | The logging level for dfs namenode. Other values are "dir" (trace namespace mutations), "block" (trace block under/over replications and block creations/deletions), or "all".
dfs.secondary.http.address | 0.0.0.0:50090 | The secondary namenode http server address and port. If the port is 0 then the server will start on a free port.
dfs.datanode.address | 0.0.0.0:50010 | The address where the datanode server will listen to. If the port is 0 then the server will start on a free port.
dfs.datanode.http.address | 0.0.0.0:50075 | The datanode http server address and port. If the port is 0 then the server will start on a free port.
dfs.datanode.ipc.address | 0.0.0.0:50020 | The datanode ipc server address and port. If the port is 0 then the server will start on a free port.
dfs.datanode.handler.count | 3 | The number of server threads for the datanode.
dfs.http.address | 0.0.0.0:50070 | The address and the base port where the dfs namenode web ui will listen on. If the port is 0 then the server will start on a free port.
dfs.https.enable | false | Decide if HTTPS (SSL) is supported on HDFS.
dfs.https.need.client.auth | false | Whether SSL client certificate authentication is required.
dfs.https.server.keystore.resource | ssl-server.xml | Resource file from which ssl server keystore information will be extracted.
dfs.https.client.keystore.resource | ssl-client.xml | Resource file from which ssl client keystore information will be extracted.
dfs.datanode.https.address | 0.0.0.0:50475 | 
dfs.https.address | 0.0.0.0:50470 | 
dfs.datanode.dns.interface | default | The name of the Network Interface from which a data node should report its IP address.
dfs.datanode.dns.nameserver | default | The host name or IP address of the name server (DNS) which a DataNode should use to determine the host name used by the NameNode for communication and display purposes.
dfs.replication.considerLoad | true | Decide if chooseTarget considers the target's load or not.
dfs.default.chunk.view.size | 32768 | The number of bytes to view for a file on the browser.
dfs.datanode.du.reserved | 0 | Reserved space in bytes per volume. Always leave this much space free for non dfs use.
dfs.name.dir | ${hadoop.tmp.dir}/dfs/name | Determines where on the local filesystem the DFS name node should store the name table (fsimage). If this is a comma-delimited list of directories then the name table is replicated in all of the directories, for redundancy.
dfs.name.edits.dir | ${dfs.name.dir} | Determines where on the local filesystem the DFS name node should store the transaction (edits) file. If this is a comma-delimited list of directories then the transaction file is replicated in all of the directories, for redundancy. Default value is the same as dfs.name.dir.
dfs.web.ugi | webuser,webgroup | The user account used by the web interface. Syntax: USERNAME,GROUP1,GROUP2, ...
dfs.permissions | true | If "true", enable permission checking in HDFS. If "false", permission checking is turned off, but all other behavior is unchanged. Switching from one parameter value to the other does not change the mode, owner or group of files or directories.
dfs.permissions.supergroup | supergroup | The name of the group of super-users.
dfs.data.dir | ${hadoop.tmp.dir}/dfs/data | Determines where on the local filesystem a DFS data node should store its blocks. If this is a comma-delimited list of directories, then data will be stored in all named directories, typically on different devices. Directories that do not exist are ignored.
dfs.replication | 3 | Default block replication. The actual number of replications can be specified when the file is created. The default is used if replication is not specified at create time.
dfs.replication.max | 512 | Maximal block replication.
dfs.replication.min | 1 | Minimal block replication.
dfs.block.size | 67108864 | The default block size for new files.
dfs.df.interval | 60000 | Disk usage statistics refresh interval in msec.
dfs.client.block.write.retries | 3 | The number of retries for writing blocks to the data nodes, before we signal failure to the application.
dfs.blockreport.intervalMsec | 3600000 | Determines block reporting interval in milliseconds.
dfs.blockreport.initialDelay | 0 | Delay for first block report in seconds.
dfs.heartbeat.interval | 3 | Determines datanode heartbeat interval in seconds.
dfs.namenode.handler.count | 10 | The number of server threads for the namenode.
dfs.safemode.threshold.pct | 0.999f | Specifies the percentage of blocks that should satisfy the minimal replication requirement defined by dfs.replication.min. Values less than or equal to 0 mean not to start in safe mode. Values greater than 1 will make safe mode permanent.
dfs.safemode.extension | 30000 | Determines extension of safe mode in milliseconds after the threshold level is reached.
dfs.balance.bandwidthPerSec | 1048576 | Specifies the maximum amount of bandwidth that each datanode can utilize for the balancing purpose, in terms of the number of bytes per second.
dfs.hosts | | Names a file that contains a list of hosts that are permitted to connect to the namenode. The full pathname of the file must be specified. If the value is empty, all hosts are permitted.
dfs.hosts.exclude | | Names a file that contains a list of hosts that are not permitted to connect to the namenode. The full pathname of the file must be specified. If the value is empty, no hosts are excluded.
dfs.max.objects | 0 | The maximum number of files, directories and blocks dfs supports. A value of zero indicates no limit to the number of objects that dfs supports.
dfs.namenode.decommission.interval | 30 | Namenode periodicity in seconds to check if decommission is complete.
dfs.namenode.decommission.nodes.per.interval | 5 | The number of nodes namenode checks if decommission is complete in each dfs.namenode.decommission.interval.
dfs.replication.interval | 3 | The periodicity in seconds with which the namenode computes replication work for datanodes.
dfs.access.time.precision | 3600000 | The access time for an HDFS file is precise up to this value. The default value is 1 hour. Setting a value of 0 disables access times for HDFS.
dfs.support.append | false | Does HDFS allow appends to files? This is currently set to false because there are bugs in the "append code" and it is not supported in any production cluster.
Source: docs/hdfs-default.html
These are the meanings of the HDFS parameters.
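The defaults above come from hdfs-default.xml; per-cluster changes normally go into hdfs-site.xml, which overrides them. Below is a minimal sketch of such an override file, using properties taken from the table; the directory paths and the reserved-space figure are illustrative assumptions, not recommendations.

```xml
<?xml version="1.0"?>
<!-- hdfs-site.xml: site-specific overrides of the defaults listed above.
     Paths and sizes here are illustrative only. -->
<configuration>

  <!-- Keep three copies of every block (same as the shipped default). -->
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>

  <!-- Store the fsimage on a dedicated disk instead of hadoop.tmp.dir.
       A comma-delimited list would replicate the name table across directories. -->
  <property>
    <name>dfs.name.dir</name>
    <value>/data/hdfs/name</value>
  </property>

  <!-- Block storage directories for each datanode; nonexistent ones are ignored. -->
  <property>
    <name>dfs.data.dir</name>
    <value>/data/hdfs/data1,/data/hdfs/data2</value>
  </property>

  <!-- Reserve 10 GB per volume for non-DFS use (value is in bytes). -->
  <property>
    <name>dfs.datanode.du.reserved</name>
    <value>10737418240</value>
  </property>

</configuration>
```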
Two parameters from the table deserve a closer look:

dfs.replication.min
the minimum number of replicas per block

dfs.safemode.threshold.pct
the safe-mode threshold percentage

The official description of the latter: "Specifies the percentage of blocks that should satisfy the minimal replication requirement defined by dfs.replication.min. Values less than or equal to 0 mean not to start in safe mode. Values greater than 1 will make safe mode permanent."

dfs.replication.min defines the minimum number of replicas a block must have.

dfs.safemode.threshold.pct defines the point at which the NameNode stays in safe mode: while fewer than this fraction of blocks meet the minimum replication level, the system remains in safe mode. The value should therefore be a number between 0 and 1, i.e., the lowest level of replication at which you still consider the system safe to run. A value greater than 1 keeps the system in safe mode permanently, and in safe mode the filesystem cannot serve write requests. A value that is too low lets the NameNode leave safe mode while data is still under-replicated, so it is best not to change this parameter and to keep the default of 0.999 (99.9%).
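To make the interaction concrete: with the defaults dfs.replication.min = 1 and dfs.safemode.threshold.pct = 0.999, a namespace holding 10,000 blocks leaves safe mode only once at least 9,990 blocks have reported at least one replica, and then only after the extra dfs.safemode.extension delay of 30,000 ms. A minimal hdfs-site.xml fragment that states these settings explicitly (the values simply restate the defaults from the table above, not a tuning recommendation):

```xml
<!-- hdfs-site.xml fragment: safe-mode related settings, restating the defaults. -->
<configuration>

  <!-- A block is considered "safe" once it has this many live replicas. -->
  <property>
    <name>dfs.replication.min</name>
    <value>1</value>
  </property>

  <!-- Leave safe mode only after 99.9% of blocks meet dfs.replication.min.
       A value <= 0 skips safe mode; a value > 1 makes it permanent. -->
  <property>
    <name>dfs.safemode.threshold.pct</name>
    <value>0.999f</value>
  </property>

  <!-- Wait a further 30 seconds after the threshold is reached before leaving safe mode. -->
  <property>
    <name>dfs.safemode.extension</name>
    <value>30000</value>
  </property>

</configuration>
```

If a NameNode does get stuck in safe mode, for example because the threshold was mistakenly set above 1, an administrator can force it out manually with hadoop dfsadmin -safemode leave.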