Table of Contents
...
Info: If doing a local install, or an install within a VPC that uses its own repositories, run the following to skip installing the public repository. This assumes you are managing the repositories yourself.
Once finished, the install server has started a web server on port 7180, so tunnel that port to a local port (e.g. 7180) using ssh and visit that "localhost" page in your local browser (or access the page directly if you are directly connected to the node).
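The tunnel can be set up as follows (a sketch; the user and hostname are placeholders, substitute your own):

```shell
# Forward local port 7180 to port 7180 on the install server
# "root@install-server" is a placeholder - use your actual SSH user/host
ssh -L 7180:localhost:7180 root@install-server
# Then browse to http://localhost:7180 on your local machine
```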
User interface phase - installation
Log in using admin/admin.
On the first page, select "Cloudera Express" and "Continue" twice (until you reach the "Specify hosts for your CDH cluster installation" page).
Add the hostnames for the nodes you want to add to the cluster, then "Search" and "Continue" (assuming the right hostnames appeared).
On the "Cluster Installation" page, accept the defaults and "Continue".
Info: When installing in a VPC, both the parcels and cloudera-manager repositories are local, and need to be configured as follows. The first step is setting your parcels repository: click the "More Options" button next to "Use Parcels", remove all the remote parcel URLs, and add ours. The other thing that needs to change is the cloudera-manager repository. (The GPG key is not configured as of this writing.)
On the next page, select the "Install Oracle Java SE Development Kit" option and "Continue".
On the page after that, ignore the "Single User Mode" option and "Continue".
The next page asks for the SSH login credentials. For most systems, this involves uploading an ssh key. Don't forget to set the key passphrase if the key has one. Only one simultaneous installation needs to be specified.
...
If the JDK download fails early in the process, simply run the following:
and then try again. Another common issue is the DB failing to start with an error like "pg_ctl: could not start server". This normally happens because a different process has created the file "/var/lock/postgresql", and the "cloudera-scm-user" does not have permission to write to it. Simply delete the file and try again.
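For the lock-file issue, the fix described above looks roughly like this on the install server (a sketch; the embedded DB service name "cloudera-scm-server-db" is assumed and may vary by install):

```shell
# Remove the lock file the embedded DB cannot write to
rm -f /var/lock/postgresql
# Retry the embedded database (service name may differ on your install)
service cloudera-scm-server-db restart
```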
Info: In the VPCs which run on CentOS 6, you would use root here; ec2-user is an Amazon Linux thing.
"Continue" on, which will start the installation. Once that is done, "Continue" again to move to another automatic installation page ("Installing Selected Parcels"), and "Continue" once that is done.
Warning: For Amazon installs on nodes that use Amazon's built-in DNS but are configured with a Route 53 (or similar) hostname, Cloudera reports the installs as failing on the heartbeat. Just ignore this and carry on. When you get to the role assignment page, it will only let you assign to the "Cloudera Manager" node. At that point, in a new tab:
Once everything is installed, you will likely see some "Java hostname consistency" errors. These can be suppressed in the Manager app by following the links.
The next page is the "Host Inspector". This page reports warnings and errors; the following warnings can be ignored:
- "Cloudera recommends setting /proc/sys/vm/swappiness to 0"
- "There are mismatched versions across the system, which will cause failures. See below for details on which hosts are running what versions of components" (this just refers to Java)
- "Cloudera supports versions 1.6.0_31 and 1.7.0_55 of Oracle's JVM and later. OpenJDK is not supported, and gcj is known to not work. Check the component version table below to identify hosts with unsupported versions of Java."
...
Using the "Search" bar to find them, modify the following configuration settings:
- Change "Number of Tasks to Run per JVM" to -1
- Set "MapReduce Service Environment Advanced Configuration Snippet (Safety Valve)" to JAVA_HOME="/usr/java/default/jre/"
- Find "MapReduce Child Java Opts Base" and append "-Djava.security.policy=/opt/infinite-home/config/security.policy" after the already-present "-Djava.net.preferIPv4Stack=true" (with a space between them)
- Search for "Simultaneous" and set (e.g.) "Maximum Number of Simultaneous Map Tasks" to 2 and "Maximum Number of Simultaneous Reduce Tasks" to 1 (on instances larger than the typical 15GB ones, for heavy batch analytics use, these can be increased)
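After the steps above, the two text-valued settings should read roughly as follows (a sketch assembled from the values given above, not copied from a live cluster):

```text
# MapReduce Service Environment Advanced Configuration Snippet (Safety Valve)
JAVA_HOME="/usr/java/default/jre/"

# MapReduce Child Java Opts Base (existing flag, then the appended policy flag)
-Djava.net.preferIPv4Stack=true -Djava.security.policy=/opt/infinite-home/config/security.policy
```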
Then select the "Save Changes" button. This brings up two "Stale Configuration" notifications in the top left:
...
Code Block:
runuser hdfs -c "hadoop fs -mkdir /user/"
runuser hdfs -c "hadoop fs -mkdir /user/tomcat"
runuser hdfs -c "hadoop fs -chmod a+w /user/tomcat"
Warning: Finally, note that the CDH5 install has on occasion stopped the "iptables" service, which is used to redirect from 8080 to 80; this causes the cluster to lose external connectivity. To fix this, simply run "service iptables start" on all API nodes.
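One way to run the fix above across all API nodes from a single shell (a sketch; the hostnames "api-node-1" and "api-node-2" are placeholders for your actual API nodes):

```shell
# Restart iptables on each API node - hostnames below are placeholders
for host in api-node-1 api-node-2; do
  ssh root@"$host" "service iptables start"
done
```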
Installing CDH5 (YARN)
IKANOW is not yet integrated with CDH5 running YARN. Once it is, this section will explain how to:
...