Extract hadoop-2.2.0.tar.gz (e.g. tar -zxf hadoop-2.2.0.tar.gz).
Directory layout:
drwxr-xr-x 2 qiulp qiulp 4096 Oct 22 11:37 bin/     ...... hadoop and yarn commands
drwxr-xr-x 3 qiulp qiulp 4096 Oct  7 14:38 etc/     ...... site XML configuration files
drwxr-xr-x 2 qiulp qiulp 4096 Oct  7 14:38 include/
drwxr-xr-x 2 qiulp qiulp 4096 Oct 22 11:40 sbin/    ...... start/stop scripts
drwxr-xr-x 4 qiulp qiulp 4096 Oct  7 14:38 share/   ...... jars and sources (example jars)
Configure the JDK environment variable for Hadoop:
Edit etc/hadoop/hadoop-env.sh and etc/hadoop/yarn-env.sh and set JAVA_HOME, e.g.: export JAVA_HOME=/usr/local/jrockit-jdk1.6.0_29
Edit etc/hadoop/slaves; for a single node, just put this machine's hostname in it.
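Alongside the hadoop-env.sh edit, PATH exports in the shell profile make the hadoop and yarn commands usable from any directory. A minimal sketch, assuming the install landed in /usr/local/hadoop-2.2.0 (adjust both paths to your layout):

```shell
# Assumed install locations; adjust to match your environment
export JAVA_HOME=/usr/local/jrockit-jdk1.6.0_29
export HADOOP_HOME=/usr/local/hadoop-2.2.0
export PATH="$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH"
```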
Set up passwordless SSH login on the single node.
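Passwordless login can be set up roughly as follows. This is a sketch for a single node; it only generates a key if one is not already present:

```shell
# Create ~/.ssh and an RSA key pair with an empty passphrase, if missing
mkdir -p ~/.ssh && chmod 700 ~/.ssh
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa -q
# Authorize the key for logins to this machine
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
# Verify (should print the hostname without a password prompt):
# ssh localhost hostname
```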
Edit the site XML files:
core-site.xml
<property>
<name>fs.default.name</name> <!-- deprecated alias of fs.defaultFS, still honored in 2.2.0 -->
<value>hdfs://qiulp:9010</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/usr/local/hadoop/tmp</value>
</property>
...
hdfs-site.xml
<property>
<name>dfs.name.dir</name>
<value>/usr/local/hadoop/name</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>/usr/local/hadoop/data</value>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
...
mapred-site.xml
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value> <!-- framework to use: yarn; other options are local and classic (classic being the JobTracker-based MRv1) -->
</property>
<property>
<name>mapreduce.cluster.temp.dir</name>
<value>/usr/local/hadoop/ctmp/</value>
<description>No description</description>
<final>true</final>
</property>
<property>
<name>mapreduce.cluster.local.dir</name>
<value>/usr/local/hadoop/clocal</value>
<description>No description</description>
<final>true</final>
</property>
...
yarn-site.xml
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>qiulp:8031</value>
<description>host is the hostname of the resource manager and
port is the port on which the NodeManagers contact the Resource Manager.
</description>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>qiulp:8030</value>
<description>host is the hostname of the resourcemanager and port is the port
on which the Applications in the cluster talk to the Resource Manager.
</description>
</property>
<property>
<name>yarn.resourcemanager.scheduler.class</name>
<value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
<description>In case you do not want to use the default scheduler</description>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>qiulp:8032</value>
<description>the host is the hostname of the ResourceManager and the port is the port on
which the clients can talk to the Resource Manager. </description>
</property>
<property>
<name>yarn.nodemanager.local-dirs</name>
<value></value>
<description>the local directories used by the nodemanager</description>
</property>
<property>
<name>yarn.nodemanager.address</name>
<value>qiulp:0</value>
<description>the nodemanagers bind to this port</description>
</property>
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>10240</value>
<description>the amount of memory available on the NodeManager, in MB</description>
</property>
<property>
<name>yarn.nodemanager.remote-app-log-dir</name>
<value>/app-logs</value>
<description>directory on hdfs where the application logs are moved to </description>
</property>
<property>
<name>yarn.nodemanager.log-dirs</name>
<value></value>
<description>the directories used by Nodemanagers as log directories</description>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
<description>shuffle service that needs to be set for Map Reduce to run </description>
</property>
<property>
<name>yarn.web-proxy.address</name>
<value>qiulp:8038</value>
</property>
...
capacity-scheduler.xml
The defaults are fine.
Run the format command:
hadoop namenode -format
(Normally this succeeds immediately with no y/n prompt. If it fails, delete the related files, e.g. empty out /usr/local/hadoop.)
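The cleanup the note describes can be made explicit. This sketch removes the directories configured in core-site.xml and hdfs-site.xml above; it is destructive, so it only makes sense on a throwaway single-node setup:

```shell
# Remove stale HDFS metadata/data left by a failed or previous format.
# Paths come from hadoop.tmp.dir, dfs.name.dir and dfs.data.dir above.
rm -rf /usr/local/hadoop/tmp /usr/local/hadoop/name /usr/local/hadoop/data
# Then re-run the format:
# hadoop namenode -format
```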
Start the daemons:
sbin/start-all.sh
Running jps should then show:
5451 NodeManager
5033 SecondaryNameNode
5226 ResourceManager
4516 NameNode
4735 DataNode
Start a standalone WebAppProxy server. If multiple servers are used with load balancing it should be run on each of them:
$ $HADOOP_YARN_HOME/sbin/yarn-daemon.sh start proxyserver --config $HADOOP_CONF_DIR
Start the MapReduce JobHistory Server with the following command, run on the designated server:
$ $HADOOP_PREFIX/sbin/mr-jobhistory-daemon.sh start historyserver --config $HADOOP_CONF_DIR
jps should now additionally show:
7540 WebAppProxyServer
7628 JobHistoryServer
With the JobHistoryServer running, historical job logs are available at http://qiulp:19888/jobhistory
Related web interfaces:
NameNode: http://nn_host:port/ (default HTTP port 50070)
ResourceManager: http://rm_host:port/ (default HTTP port 8088)
MapReduce JobHistory Server: http://jhs_host:port/ (default HTTP port 19888)