Software Container Hosts – Adding a Data Node
Starting the Data Node – dna3

$ sudo nano /opt/hosts-0.2
::
172.17.10.22 dna3 # datanode

$ dkcreate a
Warning: Permanently added 'dna3,172.17.10.22' (ECDSA) to the list of known hosts.
dna3 created

$ dkstart a
dna3 starting
java version "1.7.0_79"
Scala compiler version 2.11.5 -- Copyright 2002-2013, LAMP/EPFL

$ starthdfs a

[Key point] Run the "hdfs dfsadmin -refreshNodes" command on the NameNode to refresh the membership information for all DataNodes.
Software Container Hosts – Configuring the HDFS Data Node Whitelist
Configuring the HDFS Data Node whitelist

$ sudo nano /opt/conf/A/hdfs.allow
dna1
dna2
dna3

$ sudo nano /opt/conf/A/hdfs-site.xml
::
<configuration>
  <property>
    <name>dfs.hosts</name>
    <value>/opt/conf/A/hdfs.allow</value>
  </property>
</configuration>
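The effect of the `dfs.hosts` allow list can be sketched as a simple membership check: a DataNode whose hostname is not listed in the file is refused registration. This is a conceptual sketch only, using a temporary file as a stand-in for /opt/conf/A/hdfs.allow; `dna3` and `obf` are the hosts used elsewhere in these slides.

```shell
#!/bin/bash
# Stand-in for /opt/conf/A/hdfs.allow (same three entries as above).
printf 'dna1\ndna2\ndna3\n' > /tmp/hdfs.allow

# grep -x matches the whole line, so "dna" would not match "dna3".
for host in dna3 obf; do
  if grep -qx "$host" /tmp/hdfs.allow; then
    echo "$host: allowed"    # listed -> NameNode accepts registration
  else
    echo "$host: rejected"   # unlisted -> NameNode refuses registration
  fi
done
```

This mirrors what the test below shows: dna3 joins the cluster, while the unlisted obf never appears as a live datanode in the report.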
Enabling the Data Node whitelist

$ ssh nna hdfs dfsadmin -refreshNodes
Refresh nodes successful

$ ssh nna hdfs dfsadmin -report
::
Live datanodes (3):
Dead datanodes (1):
Testing the Data Node whitelist

$ sudo nano /opt/hosts-0.2
172.17.10.23 obf # datanode

$ dkcreate a.hdfs
::
Warning: Permanently added 'obf,172.17.10.23' (ECDSA) to the list of known hosts.
obf created

$ dkstart a.hdfs
obf starting
java version "1.7.0_79"
Scala compiler version 2.11.5 -- Copyright 2002-2013, LAMP/EPFL

$ starthdfs a
datanode running as process 164. Stop it first.
starting datanode, logging to /tmp/hadoop-bigred-datanode-obf.out

$ ssh nna hdfs dfsadmin -report
Software Container Hosts – Configuring Rack Awareness
Writing /opt/rack-awareness.sh

$ sudo nano /opt/rack-awareness.sh
#!/bin/bash
# get IP address from the input
ipaddr=$1
# select x.y and convert it to x/y
segments=`echo $ipaddr | cut -d '.' -f 2-3 --output-delimiter='/'`
echo /${segments}

$ sudo chmod +x /opt/rack-awareness.sh
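The mapping this script produces can be checked on its own: it takes the second and third octets of an IP address and joins them with '/' to form a rack path. A minimal sketch using the same `cut` logic, with sample IPs drawn from these slides (172.17.1.20 is dna1 from the report below; 172.17.10.22 is dna3):

```shell
#!/bin/bash
# Same transformation as /opt/rack-awareness.sh: octets 2-3 become /x/y.
# Note: --output-delimiter is a GNU cut option.
for ipaddr in 172.17.1.20 172.17.10.22; do
  segments=$(echo "$ipaddr" | cut -d '.' -f 2-3 --output-delimiter='/')
  echo "/${segments}"
done
# Prints:
#   /17/1
#   /17/10
```

The NameNode calls the script with a DataNode's IP and uses the returned path as the rack ID, which is why the report below shows "Rack: /17/1".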
Configuring Rack Awareness

$ sudo nano /opt/hadoop-2.6.0/etc/hadoop/core-site.xml
::
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://nna:8020</value>
  </property>
  <property>
    <name>net.topology.script.file.name</name>
    <value>/opt/rack-awareness.sh</value>
  </property>
</configuration>
Enabling Rack Awareness

$ hadoop-daemon.sh stop namenode
stopping namenode

$ hadoop-daemon.sh start namenode
starting namenode, logging to /tmp/hadoop-pi-namenode-nna.out

$ hdfs dfsadmin -report
::
Name: 172.17.1.20:50010 (dna1)
Hostname: nna
Rack: /17/1
Decommission Status : Normal
Configured Capacity: 3787268096 (3.53 GB)
DFS Used: 28672 (28 KB)
Non DFS Used: 3023904768 (2.82 GB)
Software Container Hosts – Creating a Secondary Name Node
Secondary NameNode operation diagram (figure)
Creating the sna container host

$ sudo nano /opt/hosts-0.2
$ dkcreate a
sna container host – configuring hdfs-site.xml

$ sudo nano /opt/hadoop-2.6.0/etc/hadoop/hdfs-site.xml
::
<configuration>
  <property>
    <name>dfs.namenode.http-address</name>
    <value>nna:50070</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>sna:50090</value>
  </property>
  <property>
    <name>dfs.namenode.checkpoint.period</name>
    <value>600</value> <!-- 10 minutes -->
  </property>
</configuration>
sna container host – starting the Secondary Name Node

$ ssh sna tree sn
sn
├── current
│   ├── edits_0000000000000000027-0000000000000000028
│   ├── fsimage_0000000000000000026
::
└── in_use.lock

$ tail /tmp/hadoop-pi-secondarynamenode-sna.log
org.apache.hadoop.hdfs.server.namenode.TransferFsImage: Uploaded image with txid 28 to namenode at http://nna:50070 in 0.219 seconds
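The file names in the listing above describe the checkpoint itself: the Secondary NameNode merges the edits segment into the previous fsimage and uploads a new fsimage at the segment's last transaction ID. A conceptual sketch, using the transaction IDs actually shown in the sna listing (26, 27, 28):

```shell
#!/bin/bash
# Transaction IDs taken from the tree output above.
fsimage_txid=26   # fsimage_0000000000000000026
edits_start=27    # edits_..._27-..._28
edits_end=28

# The checkpoint merges the edits range into the fsimage; the new fsimage
# carries the last txid of the merged edits segment.
echo "fsimage_${fsimage_txid} + edits_${edits_start}-${edits_end} -> fsimage_${edits_end}"
# Prints: fsimage_26 + edits_27-28 -> fsimage_28
```

This matches the log line above: "Uploaded image with txid 28 to namenode". With dfs.namenode.checkpoint.period set to 600, this merge repeats every 10 minutes.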