Table of Contents |
---|
...
On each node in the cluster (API and DB nodes, regardless of whether Hadoop/HDFS will actually be running - but see the info box below), run the "/opt/hadoop-infinite/scripts/online_install.sh" script.
- (eg 'sh infinite_run_script_el6.sh <CLUSTER> "sh /opt/hadoop-infinite/scripts/online_install.sh" <HOSTS>' for enterprise users)
Info |
---|
If you are not going to include the node in the cluster install, then run "sh /opt/hadoop-infinite/scripts/online_install.sh partial", ie adding the argument "partial" - otherwise the symbolic link "/opt/hadoop-infinite/lib" will point to the wrong place. If you do not do this, and subsequently decide not to add the node to the cluster, it can be fixed afterwards by correcting that symbolic link. |
Finally, for the "command line phase", select a node on which to run the Cloudera Manager server, and run "sh /opt/hadoop-infinite/scripts/online_install.sh full" on that server in an interactive console. Select "<Next>/<Yes>/<OK>" whenever prompted - this console UI has no options.
Info |
---|
If doing a local install or doing an install within a VPC which uses its own repositories, you'll want to run the following to skip installing the public repository. This assumes you are managing repositories yourself.
If the JDK download fails early on in the process, simply run the following, and then try again:
Another common issue is the DB failing to start with an error like "pg_ctl: could not start server". This is normally because a different process has created the file "/var/lock/postgresql", and the "cloudera-scm" user does not have permission to write into it. Simply delete the file and try again. |
Once finished, the install server has started a web server on 7180, so tunnel that port to a local port (eg 7180!) using ssh and visit that "localhost" page on your local browser (or access the page directly if you are directly connected to the node).
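As a sketch, the tunnel might look like this (the user and hostname are placeholders; any free local port can stand in for the local 7180):

```shell
# Forward local port 7180 to port 7180 on the Cloudera Manager node.
# -N: run no remote command, just hold the tunnel open.
ssh -N -L 7180:localhost:7180 root@<manager-node>
# Now browse to http://localhost:7180 on the local machine.
```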
...
On the "Cluster Installation" page, accept the defaults and "Continue".
Info |
---|
When installing in a VPC, both the parcels and cloudera-manager repositories are local, and need to be configured as follows. The first step is setting the parcels repository: click the "More Options" button next to "Use Parcels", remove all the remote parcel repositories listed, and add ours. The other thing that needs to change is the cloudera-manager repository. (The GPG key is not configured as of this writing.) |
On the page after that, select the "Install Oracle Java SE Development Kit" option and "Continue".
...
The next page requires you to select the SSH login credentials. For most systems, this involves uploading an ssh key. Don't forget to supply the key passphrase if the key has one. Only 1 simultaneous installation need be specified.
Info |
---|
In VPCs running CentOS 6, use "root" here - "ec2-user" only applies to Amazon Linux. |
"Continue" on, which will start the installation. Once that is done, "Continue" again to move to another automatic installation page ("Installing Selected Parcels"). "Continue" once that is done.
Warning |
---|
For Amazon installs on nodes that use Amazon's built-in DNS but are configured with a Route 53 (or similar) hostname, Cloudera reports the installs as failing on the heartbeat - just ignore this and carry on. When you get to the role assignment page, it will only let you assign to the "Cloudera Manager" node. At that point in a new tab:
Once everything is installed, you will likely see some "Java hostname consistency" errors. These can be suppressed in the Manager app by following the links. |
The next page is the "Host inspector" - this page will provide warnings and errors. The following warnings can be ignored:
- "Cloudera recommends setting /proc/sys/vm/swappiness to 0"
- "There are mismatched versions across the system, which will cause failures. See below for details on which hosts are running what versions of components"
- (this just refers to Java)
- "Cloudera supports versions 1.6.0_31 and 1.7.0_55 of Oracle's JVM and later. OpenJDK is not supported, and gcj is known to not work. Check the component version table below to identify hosts with unsupported versions of Java."
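If you want to see the value the swappiness warning is complaining about, it can be checked directly on any node (assumes a Linux shell; changing it is optional, since the warning is ignorable):

```shell
# Current swappiness - the host inspector warns when this is above
# Cloudera's recommended setting.
cat /proc/sys/vm/swappiness
# Optional (requires root) - apply the recommendation instead of ignoring it:
#   sysctl vm.swappiness=0
```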
...
The next set of pages configure the various services and roles.
Firstly, a set of directories. These should each contain a single entry reading "/mnt" (the Hadoop installer will create symlinks for "/raidarray" if that is present). Sometimes the installer will insert "/dbarray" as a second directory - this is undesirable, so review the page carefully and use the "-" button to remove these suggestions.
When done, "Continue" to the next page, an automated installation page. When that has completed, "Continue" again, and "Finish" after accepting the Web page's congratulations. However, you are not quite done yet.
After selecting "Finish", you are taken to the main monitoring/management page. You may see red "Health Issues" - it is worth checking these, but they are mainly "low free space" warnings for log directories (the "infinit.e-hadoop-installer" has quite aggressive log rotate/delete schedules, and the root partition is not used for much), so these can be ignored.
Select the MapReduce service from the home page, and then the Configuration tab:
Using the "Search" bar to find them, modify the following configuration settings:
- Change "Number of Tasks to Run per JVM" to -1
- Set "MapReduce Service Environment Advanced Configuration Snippet (Safety Valve)" to
- JAVA_HOME="/usr/java/default/jre/"
- Find "MapReduce Child Java Opts Base" and append "-Djava.security.policy=/opt/infinite-home/config/security.policy" after (the already present) "-Djava.net.preferIPv4Stack=true" (with a space between them)
- Search for "Simultaneous" and set (eg) "Maximum Number of Simultaneous Map Tasks" to 2 and "Maximum Number of Simultaneous Reduce Tasks" to 1
- (on larger instances than the typical 15GB instances, for heavy batch analytics use, this can be increased)
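For reference, the JVM-reuse, task-slot, and child-opts settings above correspond to mapred-site.xml properties. The property names below are MR1/CDH4-era and are an assumption - the Cloudera Manager UI manages these for you, so this just writes a sample fragment for inspection:

```shell
# Equivalent mapred-site.xml entries (property names assumed, see above):
cat > /tmp/mapred-site-fragment.xml <<'EOF'
<property><name>mapred.job.reuse.jvm.num.tasks</name><value>-1</value></property>
<property><name>mapred.tasktracker.map.tasks.maximum</name><value>2</value></property>
<property><name>mapred.tasktracker.reduce.tasks.maximum</name><value>1</value></property>
<property><name>mapred.child.java.opts</name><value>-Djava.net.preferIPv4Stack=true -Djava.security.policy=/opt/infinite-home/config/security.policy</value></property>
EOF
grep -c '<property>' /tmp/mapred-site-fragment.xml  # 4 entries
```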
Then select the "Save Changes" button. This brings up two "Stale Configuration" notifications in the top left:
(the two icons to the left of the orange circle)
Select the second of those "Client configuration redeployment needed", review the changes and select the "Restart" cluster button in the top right:
(and then the "Restart Now" button on the next page - ignore the "Rolling Restart" and leave the "Re-deploy client configuration" checked)
Final steps
Before leaving the UI, go back to the "MapReduce" service, and select (from the "Actions" button in the top right) "Download Client Configuration".
Returning to the command line:
- Copy the contents of the downloaded zip (the files in its "hadoop-conf" directory) to the "/opt/hadoop-infinite/mapreduce/hadoop" directory of each API node
- (eg "sh hadoop_config_deploy_el6.sh <CLUSTER-NAME> ~/Downloads/mapreduce-clientconfig.zip <API HOSTS>" for enterprise users)
- Upgrade the API nodes to v0.5 (if not already done - restart the "tomcat6-interface-engine" nodes if v0.5 is already installed)
- Remove the "/opt/infinite-home/bin/STOP_CUSTOM" file
- (eg 'sh infinite_run_script_el6.sh <CLUSTER> "rm -f /opt/infinite-home/bin/STOP_CUSTOM" <HOSTS>' for enterprise users)
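For non-enterprise users, the client configuration copy step above can be done by hand. A sketch for a single API node (the host is a placeholder; the zip layout with its "hadoop-conf" directory is as described above):

```shell
# Unpack the client configuration downloaded from Cloudera Manager...
unzip -o ~/Downloads/mapreduce-clientconfig.zip -d /tmp/cm-client-config
# ...and push the files to the Hadoop config directory on each API node.
scp /tmp/cm-client-config/hadoop-conf/* root@<api-node>:/opt/hadoop-infinite/mapreduce/hadoop/
```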
Finally, the "tomcat" user directory should be added and made world writable (from any of the nodes):
Code Block |
---|
runuser hdfs -c "hadoop fs -mkdir /user/"
runuser hdfs -c "hadoop fs -mkdir /user/tomcat"
runuser hdfs -c "hadoop fs -chmod a+w /user/tomcat" |
Warning |
---|
Finally, note that the CDH5 install has on occasion (!) stopped the "iptables" service, which is used to redirect from 8080 to 80 - this causes the cluster to lose external connectivity. To fix this, simply run "service iptables start" on all API nodes. |
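A quick check/fix for the iptables issue, run on each API node (assumes the EL6 "service" tooling used elsewhere in this guide):

```shell
# If the CDH5 install stopped iptables, restart it so the 8080/80 redirect
# (and hence external connectivity) comes back.
service iptables status || service iptables start
```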
Installing CDH5 (YARN)
IKANOW is not yet integrated with CDH5 running YARN. Once it is, this section will explain how to:
...