How do you install Hadoop 2.7.0 on CentOS 7 servers to set up a three-node cluster?
Environment: Windows 7 + VMware 10 + CentOS 7
I. Create three CentOS 7 64-bit virtual machines

master  192.168.137.100  root/123456
node1   192.168.137.101  root/123456
node2   192.168.137.102  root/123456
II. Disable the firewall on all three virtual machines; run on each one:

systemctl stop firewalld.service
systemctl disable firewalld.service
III. Add these three lines to /etc/hosts on all three virtual machines:

192.168.137.100 master
192.168.137.101 node1
192.168.137.102 node2
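The /etc/hosts additions can also be scripted idempotently, so re-running it never duplicates entries. A minimal sketch, using a temporary file as a stand-in for /etc/hosts so it is safe to run anywhere:

```shell
# Append each cluster entry only if it is not already present, verbatim.
# HOSTS_FILE is a temp file here for illustration; on the real nodes it
# would be /etc/hosts.
HOSTS_FILE="$(mktemp)"
for entry in "192.168.137.100 master" \
             "192.168.137.101 node1" \
             "192.168.137.102 node2"; do
    grep -qxF "$entry" "$HOSTS_FILE" || echo "$entry" >> "$HOSTS_FILE"
done
cat "$HOSTS_FILE"
```

Running it twice leaves the file unchanged, which matters when the same provisioning script is replayed on a node.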
IV. Set up passwordless SSH login among the three machines

1. CentOS 7 does not enable public-key SSH login by default. In /etc/ssh/sshd_config, uncomment the following line (remove the leading #); this must be done on every server:

PubkeyAuthentication yes

Then restart the SSH service:

systemctl restart sshd
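The sshd_config edit can be done non-interactively with sed instead of a manual edit. A sketch, using a temp file seeded with the commented-out line as a stand-in for /etc/ssh/sshd_config, so it can run anywhere:

```shell
# Uncomment the PubkeyAuthentication line in place.
# CFG is a temp stand-in for /etc/ssh/sshd_config for illustration.
CFG="$(mktemp)"
echo "#PubkeyAuthentication yes" > "$CFG"
sed -i 's/^#PubkeyAuthentication yes/PubkeyAuthentication yes/' "$CFG"
cat "$CFG"
```

On the real servers, point CFG at /etc/ssh/sshd_config and follow with `systemctl restart sshd` as above.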
2. In /root on master, run ssh-keygen -t rsa and press Enter at every prompt. Do this on all three machines.

[root@master ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:aMUO8b/EkylqTMb9+71ePnQv0CWQohsaMeAbMH+t87M root@master
The key's randomart image is:
+---[RSA 2048]----+
| o ... .         |
| = o= . o        |
| + oo=. . .      |
| =.Boo o . .     |
| . OoSoB . o     |
| =.+.+ o. ...    |
| + o o .. +      |
| . o . ..+.      |
| E ....+oo       |
+----[SHA256]-----+

3. Merge the public key into authorized_keys on master:
[root@master ~]# cd /root/.ssh/
[root@master .ssh]# ll
total 8
-rw-------. 1 root root 1679 Apr 19 11:10 id_rsa
-rw-r--r--. 1 root root  393 Apr 19 11:10 id_rsa.pub
[root@master .ssh]# cat id_rsa.pub >> authorized_keys

4. Copy master's authorized_keys to node1 and node2:
scp /root/.ssh/authorized_keys root@192.168.137.101:/root/.ssh/
scp /root/.ssh/authorized_keys root@192.168.137.102:/root/.ssh/

5. Test:
[root@master ~]# ssh root@192.168.137.101
Last login: Thu Apr 19 11:41:23 2018 from 192.168.137.100
[root@node1 ~]#
[root@master ~]# ssh root@192.168.137.102
Last login: Mon Apr 23 10:40:38 2018 from 192.168.137.1
[root@node2 ~]#
V. Install the JDK on all three machines
1. JDK download link: pan.baidu.com/s/1-fhy_zbGbEXR1SBK8V7aNQ
2. Create the directory /home/java:

mkdir -p /home/java

3. Put the downloaded file jdk-7u79-linux-x64.tar.gz into /home/java and run:

tar -zxf jdk-7u79-linux-x64.tar.gz
rm -f jdk-7u79-linux-x64.tar.gz

4. Configure environment variables:
Open /etc/profile with vi and append the following:
export JAVA_HOME=/home/java/jdk1.7.0_79
export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin

Then run: source /etc/profile
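The same three exports can be kept in a standalone snippet file rather than edited into /etc/profile. A sketch, writing to a temp file for illustration (on a real node the idiomatic location would be /etc/profile.d/java.sh, which login shells source automatically):

```shell
# Write the Java environment to its own file, then source it.
# JAVA_ENV is a temp file here; on a CentOS node it would be
# /etc/profile.d/java.sh. The quoted 'EOF' keeps $JAVA_HOME literal
# in the file; expansion happens when the file is sourced.
JAVA_ENV="$(mktemp)"
cat > "$JAVA_ENV" <<'EOF'
export JAVA_HOME=/home/java/jdk1.7.0_79
export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin
EOF
. "$JAVA_ENV"
echo "$JAVA_HOME"
```

Keeping it in one file also makes it easy to scp the identical environment to node1 and node2.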
Test:
[root@master jdk1.7.0_79]# java -version
java version "1.7.0_79"
Java(TM) SE Runtime Environment (build 1.7.0_79-b15)
Java HotSpot(TM) 64-Bit Server VM (build 24.79-b02, mixed mode)
[root@master jdk1.7.0_79]#
VI. Install Hadoop 2.7 (unpack it only on the master server, then copy it to the slave servers)
1. Create the /home/hadoop directory:

mkdir -p /home/hadoop

2. Put hadoop-2.7.0.tar.gz into /home/hadoop and unpack it:

tar -zxf hadoop-2.7.0.tar.gz

3. Under /home/hadoop, create the data directories tmp, hdfs/data, and hdfs/name:

[root@master hadoop]# mkdir tmp
[root@master hadoop]# mkdir -p hdfs/data
[root@master hadoop]# mkdir -p hdfs/name

4. Configure core-site.xml in /home/hadoop/hadoop-2.7.0/etc/hadoop:

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://192.168.137.100:9000</value>
    </property>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://192.168.137.100:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/home/hadoop/tmp</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131702</value>
    </property>
</configuration>
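Steps 3 and 4 can be combined into one small script. A sketch, using a temporary directory as a stand-in for /home/hadoop so it can run anywhere, and writing only the modern fs.defaultFS key (fs.default.name is its deprecated alias):

```shell
# Create the data directories and write core-site.xml in one go.
# HADOOP_BASE is a temp dir here for illustration; on master it would be
# /home/hadoop, with the config under hadoop-2.7.0/etc/hadoop/.
HADOOP_BASE="$(mktemp -d)"
mkdir -p "$HADOOP_BASE"/{tmp,hdfs/data,hdfs/name}

cat > "$HADOOP_BASE/core-site.xml" <<'EOF'
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://192.168.137.100:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/home/hadoop/tmp</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131702</value>
    </property>
</configuration>
EOF

find "$HADOOP_BASE" -mindepth 1 | sort
```

Scripting this keeps the directory layout and config identical when the tree is later copied out to node1 and node2.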
5. Configure hdfs-site.xml in /home/hadoop/hadoop-2.7.0/etc/hadoop:
dfs.namenode.secondary.