Hadoop Single-Node Setup: An Illustrated Guide
Prerequisites:
1. A working Ubuntu 10.10 installation (don't spend too much time on the OS install itself; installing the system is not the goal here)
2. A working JDK installation (jdk1.6.0_23 for Linux; illustrated install guide: http://freewxy.iteye.com/blog/882784 )
3. The hadoop-0.21.0.tar.gz download (http://apache.etoak.com//hadoop/core/hadoop-0.21.0/ )
Installing Hadoop
1. Copy hadoop-0.21.0.tar.gz into the local folder under usr (sudo cp <path to hadoop tarball> /usr/local), as shown in Figure 1
http://dl.iteye.com/upload/attachment/474374/b91e6229-2456-399b-83ce-a082a29282d4.png
2. Change into /usr/local and extract hadoop-0.21.0.tar.gz, as shown in Figure 2
http://dl.iteye.com/upload/attachment/474376/8bcfd794-f30c-3790-8b8f-728f405f6c20.png
3. For easier management and future Hadoop upgrades, rename the extracted directory to hadoop, as shown in Figure 3
http://dl.iteye.com/upload/attachment/474378/01417760-6519-3254-84ee-5304bd3a85d8.png
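The three steps above can be run from a terminal as follows (a sketch; the tarball path ~/Downloads is an assumption, so substitute wherever you saved the download):

```shell
# Copy the tarball into /usr/local (source path is an example)
sudo cp ~/Downloads/hadoop-0.21.0.tar.gz /usr/local

# Change into /usr/local and unpack it in place
cd /usr/local
sudo tar -xzf hadoop-0.21.0.tar.gz

# Rename the extracted directory to plain "hadoop" for easier upgrades
sudo mv hadoop-0.21.0 hadoop
```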
For convenience, create a hadoop group and a user of the same name:
1. Create a group named hadoop, as shown in Figure 4
http://dl.iteye.com/upload/attachment/474380/937d35ae-00b6-343f-8dc8-c6143b152dfc.png
2. Create a user named hadoop and add it to the hadoop group, as shown in Figure 5 (some of the prompted fields can be left blank; just press Enter)
http://dl.iteye.com/upload/attachment/474384/f96b427d-b6f6-3ac7-867f-fda3c08144cd.png
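On Ubuntu the group and user can be created like this (a sketch using the standard addgroup/adduser tools):

```shell
# Create a group named "hadoop"
sudo addgroup hadoop

# Create a user "hadoop" and place it in that group
# (press Enter to skip the optional full-name/phone fields)
sudo adduser --ingroup hadoop hadoop
```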
3. (1) Grant the user sudo privileges: open the sudoers file under /etc and add the line given in (2) below, as shown in Figure 6
http://dl.iteye.com/upload/attachment/474386/61296a5c-2bbf-371a-b1c9-ba75bc49ef0d.png
(An alternative is to switch to the root user first and change the permissions on sudoers directly, but be very careful with this: after editing you must set the file back to read-only, or sudo will break. Our whole group got burned by this more than once.)
(2) Below the line root ALL=(ALL) ALL, add:
hadoop ALL = (ALL) ALL
as shown in Figure 7
http://dl.iteye.com/upload/attachment/474388/6f352ed1-d565-36b9-8ff4-07f0c3c113e3.png
(/etc/sudoers is the file sudo consults to check execution privileges.)
Then run: sudo chown hadoop /usr/local/hadoop (this gives the hadoop user ownership of the hadoop directory)
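A safer way to make the sudoers change than editing the file's permissions by hand is visudo, which validates the syntax and keeps the file read-only for you (a sketch; the sudoers edit itself happens interactively):

```shell
# Open /etc/sudoers in a validating editor; visudo refuses to save
# a syntactically broken file, so sudo cannot be locked out
sudo visudo
# In the editor, add this line below "root ALL=(ALL) ALL":
#   hadoop  ALL = (ALL) ALL

# Give the hadoop user ownership of the install directory
sudo chown hadoop /usr/local/hadoop
```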
Installing SSH (network access required; background on SSH: http://freewxy.iteye.com/blog/910820 )
1. Install openssh-server, as shown in Figure 8
http://dl.iteye.com/upload/attachment/474392/a701e03a-86a3-31de-a080-4a2861cb0f99.png
2. Generate an RSA ssh key, as shown in Figure 9
http://dl.iteye.com/upload/attachment/474394/0109f2f7-90dc-353c-86a5-9dbf6272e093.png
When prompted for the key's save path, fill it in as shown in Figure 10
http://dl.iteye.com/upload/attachment/474396/9ae6f97e-c170-35e8-b62b-bf1e34ebdbf4.png
3. Add the ssh key to the trusted list and enable it, as shown in Figure 11
http://dl.iteye.com/upload/attachment/474398/8b6adb7b-0139-33fe-b984-161e16fe29e8.png
4. Verify the SSH configuration, as shown in Figure 12
http://dl.iteye.com/upload/attachment/474400/94d73773-a058-3aa1-a8c3-f2f8c9c9d6f6.png
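The whole SSH setup above can be sketched as the following commands, run as the hadoop user (the empty passphrase and default key path are assumptions that match a typical passwordless single-node setup):

```shell
# Install the SSH server (requires network access)
sudo apt-get install openssh-server

# Generate an RSA key pair with an empty passphrase;
# accept the default save path ~/.ssh/id_rsa when prompted
ssh-keygen -t rsa -P ""

# Add the public key to the trusted list for passwordless login
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

# Verify: this should log in to localhost without asking for a password
ssh localhost
```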
Configuring Hadoop
0. Take a look at what the hadoop directory contains, as shown in Figure 13
http://dl.iteye.com/upload/attachment/474402/5ba00cb5-0de1-346f-a15a-c4d5cb3d680c.png
1. Open conf/hadoop-env.sh, as shown in Figure 14
http://dl.iteye.com/upload/attachment/474403/7d0df60c-5256-3dce-aab9-5fd2f0cc3ab7.png
Edit conf/hadoop-env.sh: find the line #export JAVA_HOME=..., remove the leading #, and set it to your local JDK path, as shown in Figure 15
http://dl.iteye.com/upload/attachment/474407/2da82865-8769-36aa-9d99-86e2f10b014b.png
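After the edit, the relevant line in conf/hadoop-env.sh should look like this (the JDK path below is only an example for jdk1.6.0_23; point it at wherever your JDK actually lives):

```shell
# In conf/hadoop-env.sh: uncomment JAVA_HOME and set it to your JDK.
# /usr/lib/jvm/jdk1.6.0_23 is an assumed example path.
export JAVA_HOME=/usr/lib/jvm/jdk1.6.0_23
```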
2. Open conf/core-site.xml and configure it as follows:
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/tmp</value>
  </property>
</configuration>
3. Open conf/mapred-site.xml and configure it as follows:
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>
Running and testing:
1. Switch to the hadoop user and format the namenode, as shown in Figure 18
http://dl.iteye.com/upload/attachment/474409/5b03a103-3b57-31c6-a602-b2a6733d27d0.png
You may hit the following error (it tends to show up once you have repeated this process a few times), as shown in Figure 19
http://dl.iteye.com/upload/attachment/474411/efe6b530-f705-3ce4-b54f-e49587f9b098.png
Run the command shown in Figure 20, then repeat the step from Figure 18
http://dl.iteye.com/upload/attachment/474413/e58cbd7b-4f0c-38a3-b782-ac886878c766.png
2. Start Hadoop, as shown in Figure 21
http://dl.iteye.com/upload/attachment/474415/d7417502-8e7e-3bd6-b3fe-2a83c69a3c48.png
3. Verify that Hadoop started successfully, as shown in Figure 22
http://dl.iteye.com/upload/attachment/474417/0afd31a8-b2c6-3ad7-81d8-fecd5c665d0a.png
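The format-start-verify sequence above can be sketched as (run from the Hadoop install directory; checking with jps is one common way to verify, assuming the JDK's jps is on your PATH):

```shell
# Switch to the hadoop user and enter the install directory
su hadoop
cd /usr/local/hadoop

# Format the HDFS namenode (only needed before the first start)
bin/hadoop namenode -format

# Start all daemons: namenode, datanode, secondary namenode,
# jobtracker and tasktracker
bin/start-all.sh

# Verify: jps should list NameNode, DataNode, SecondaryNameNode,
# JobTracker and TaskTracker
jps
```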
Running the bundled wordcount example (the exciting part!)
1. Prepare a file to run wordcount on, as shown in Figure 23 (type some arbitrary text into test.txt, then save and exit)
http://dl.iteye.com/upload/attachment/474419/cef00ebc-7aa5-3591-8607-44fa63abf8a0.png
2. Upload the test file from the previous step to the firstTest directory in the DFS, as shown in Figure 24 (if the DFS does not contain a firstTest directory, one is created automatically; use bin/hadoop dfs -ls to list the directories already in the DFS)
http://dl.iteye.com/upload/attachment/474422/c9e9f604-e568-357d-933b-ea637d90ac12.png
3. Run wordcount, as shown in Figure 25 (this runs wordcount over every file under firstTest and writes the counts to the result directory, which is created automatically if it does not exist)
http://dl.iteye.com/upload/attachment/474424/5cab5d4b-ff48-3755-b6a0-687a6b10002b.png
4. Check the results, as shown in Figure 26
http://dl.iteye.com/upload/attachment/474426/e684346a-e5b2-3e57-8d31-ad173ba21fd9.png
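The wordcount walkthrough above looks roughly like this on the command line (a sketch; the examples jar name and the part-r-00000 output file name are assumptions based on the 0.21.0 release layout, so check what your install directory actually contains):

```shell
cd /usr/local/hadoop

# Create a local test file with some words in it
echo "hello hadoop hello world" > test.txt

# Upload it into the firstTest directory in the DFS
# (the directory is created automatically if it does not exist)
bin/hadoop dfs -copyFromLocal test.txt firstTest

# Run the bundled wordcount example over firstTest; the counts
# go to "result", which must not already exist
bin/hadoop jar hadoop-mapred-examples-0.21.0.jar wordcount firstTest result

# Inspect the word counts
bin/hadoop dfs -cat result/part-r-00000
```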
Single-node setup done!
Reference and revisions: http://vampire1126.iteye.com/blog/891693