
Overview

Version 0.5 of IKANOW requires that CDH5 be installed. This will enable applications such as Storm and Spark to be run on the YARN framework.

Unfortunately, CDH5 and CDH3 are largely incompatible with one another (*) - therefore CDH5 must be installed before IKANOW v0.5. This is enforced by requiring that hadoop-installer-v0.5 be present before infinit.e-processing-engine-v0.5 can be updated (and hence also infinit.e-interface-engine-v0.5).

(*) Except that the MapReduce JARs themselves do not need to be recompiled.

This page describes the steps necessary to do this. If installing CDH5 on a new IKANOW install, the "Upgrading from CDH3" step can be ignored.

This documentation describes how to get set up with MRv1 running under CDH5.

The IKANOW platform is not currently compatible with YARN - but moving between MRv1 and YARN requires no significant re-installation, just shut down MRv1 and start YARN in the Cloudera Manager.

Upgrading from CDH3

It is not possible to upgrade from CDH3. Instead CDH3 must be removed and then CDH5 installed.

All jobs must be stopped before this occurs - create the empty file "/opt/infinite-home/bin/STOP_CUSTOM" and wait for all jobs to complete in the jobtracker ("ROOT:8090/monitoring/<jobtracker-hostname>/50030/jobtracker.jsp") before continuing.

  • (eg 'sh infinite_run_script_el6.sh <CLUSTER> "touch /opt/infinite-home/bin/STOP_CUSTOM" <HOSTS>' for enterprise users)

All files in HDFS that you don't want to lose must be copied off (eg to S3) and then copied back again when the upgrade is complete.
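One way to do the off-and-back copy is Hadoop's distcp tool. A sketch only - the source path and bucket name are placeholders, the s3n scheme and credentials depend on your CDH configuration, and the block is guarded so it is a harmless no-op on a machine without hadoop on the path:

```shell
# Placeholders - substitute your own HDFS path and S3 bucket:
SRC_DIR=/user/tomcat
BACKUP=s3n://example-backup-bucket/pre-cdh5
if command -v hadoop >/dev/null 2>&1; then
    hadoop distcp "hdfs://$SRC_DIR" "$BACKUP"    # run before uninstalling CDH3
    # ...and once CDH5 is up, copy the data back:
    # hadoop distcp "$BACKUP" "hdfs://$SRC_DIR"
fi
```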

You should check whether you have any scripts that import data into HDFS and turn them off temporarily.

Then go to the Cloudera Manager and shut down all services. (It is not necessary to remove hosts, because the cluster database will be destroyed as part of these steps.)

Once the above steps have been taken to stop Hadoop on the cluster, install/upgrade the latest "infinit.e-hadoop-installer" RPM (must be v0.5 or higher - note this also means the machines must have JDK8 installed).

  • (eg "sh infinite_deploy_rpm_el6.sh <CLUSTER> ./infinit.e-hadoop-installer.online-v0.5-* <HOSTS>" for enterprise users)

Next, run the script "/opt/hadoop-infinite/scripts/uninstall_cdh3.sh" on each node (API and DB - the script fails harmlessly if the node is not currently running CDH3).

  • (eg 'sh infinite_run_script_el6.sh <CLUSTER> "sh /opt/hadoop-infinite/scripts/uninstall_cdh3.sh" <HOSTS>' for enterprise users)

This completes the uninstall of CDH3.

Installing CDH5 (MRv1)

Command line phase

If you did not already do as a step under "Upgrading from CDH3", install/upgrade the latest "infinit.e-hadoop-installer" RPM (must be v0.5 or higher).

On each node in the cluster (API and DB nodes - regardless of whether Hadoop/HDFS will actually be running - BUT SEE INFO BOX BELOW), run the "/opt/hadoop-infinite/scripts/online_install.sh" script.

  • (eg 'sh infinite_run_script_el6.sh <CLUSTER> "sh /opt/hadoop-infinite/scripts/online_install.sh" <HOSTS>' for enterprise users)

If you are not going to be including the node in the cluster install, then run

sh /opt/hadoop-infinite/scripts/online_install.sh partial

ie adding the argument "partial" - otherwise the symbolic link "/opt/hadoop-infinite/lib" will point to the wrong place. If you do not do this, and subsequently decide not to add the node to the cluster, then it can be fixed with

rm -f /opt/hadoop-infinite/lib; ln -sf /opt/hadoop-infinite/standalone_lib /opt/hadoop-infinite/lib
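The fix above simply repoints the symlink at the standalone library set. The same remove-then-relink pattern, demonstrated against a throwaway directory layout (the temp directories are a stand-in for /opt/hadoop-infinite; only the rm/ln lines mirror the real fix):

```shell
base=$(mktemp -d)                        # stand-in for /opt/hadoop-infinite
mkdir "$base/standalone_lib" "$base/cluster_lib"
ln -sf "$base/cluster_lib" "$base/lib"   # wrong target, as after a full install
# The fix: remove the link first, then recreate it - "ln -sf" alone would
# create the new link *inside* the directory the old link points at.
rm -f "$base/lib"
ln -sf "$base/standalone_lib" "$base/lib"
readlink "$base/lib"                     # now points at .../standalone_lib
```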

Finally for the "command line phase", select a node on which to run the Cloudera Manager server, and run "sh /opt/hadoop-infinite/scripts/online_install.sh full" on that server in an interactive console. Select "<Next>/<Yes>/<OK>" whenever prompted - this console UI has no options.

Installing with local repos in a VPC

If doing a local install or doing an install within a VPC which uses its own repositories, you'll want to run the following to skip installing the public repository. This assumes you are managing repositories yourself.

/opt/hadoop-infinite/cloudera-manager-installer.bin --skip_repo_package=1

Once finished, the installer will have started a web server on port 7180, so tunnel that port to a local port (eg 7180 itself) using ssh and visit that "localhost" page in your local browser (or access the page directly if you are directly connected to the node).
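For the ssh tunnel, something like the following works (the hostname and user are placeholders, not from this page; the command is echoed here rather than executed so the sketch has no side effects):

```shell
CM_HOST="cm-node.ec2.internal"   # placeholder - the node running Cloudera Manager
# -N: no remote command; -L: forward local port 7180 to the manager's port 7180
echo ssh -N -L 7180:localhost:7180 "root@$CM_HOST"
# then browse to http://localhost:7180 on the local machine
```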

User interface phase - installation

Login using admin/admin

In the first page select "Cloudera Express" and "Continue" twice (until you get to the "Specify hosts for your CDH cluster installation")

Add the hostnames for the nodes you want to add to the cluster, "Search" and "Continue" (assuming the right hostnames appeared)

For Amazon installs on nodes that use Amazon's built-in DNS but are configured with a Route 53 hostname, it is not currently possible to install Cloudera directly. The following workarounds are possible:

  • Set the hostname back to its internal one, eg "ip-<ip-address-with-dashes-not-dots>.ec2.internal"
  • Set up a local named/bind server (and update /etc/resolv.conf etc) to "proxy" reverse lookups
    • (this is the preferred option)

 

On the "Cluster Installation" page, accept the defaults and "Continue". 

Installing in a VPC

When installing in a VPC, both the parcels repo and cloudera-manager repos are local. They need to be configured as follows.

The first step is setting your parcels repository: click the "More Options" button next to "Use Parcels", remove all the remote parcels listed, and add ours.

The other thing that needs to change is the cloudera-manager repository.

The GPG key is not configured as of this writing.


On the page after that, select the "Install Oracle Java SE Development Kit" option and "Continue".

On the page after that, ignore the "Single User Mode" option and "Continue".

The next page requires you to select the SSH login credentials. For most systems, this involves uploading an ssh key. Don't forget to enter the key passphrase if one is set. Only one simultaneous installation needs to be specified.

VPC and EL6

In VPCs which run on CentOS 6, use root here; ec2-user is specific to Amazon Linux.

 

"Continue" on, which will start the installation. Once that is done "Continue" again, to move to another automatic installation page ("Installing Selected Parcels"). "Continue" once that is done.

The next page is the "Host inspector" - this page will provide warnings and errors. The following warnings can be ignored:

  • "Cloudera recommends setting /proc/sys/vm/swappiness to 0"
  • "There are mismatched versions across the system, which will cause failures. See below for details on which hosts are running what versions of components"
    • (this just refers to Java)
  • "Cloudera supports versions 1.6.0_31 and 1.7.0_55 of Oracle's JVM and later. OpenJDK is not supported, and gcj is known to not work. Check the component version table below to identify hosts with unsupported versions of Java."

Other reported issues may cause problems.

If there are no unexpected issues, then hit "Finish". This will take you to the configuration phase described below.

In my experience, if you have to go "Back" at any point, or refresh the browser at any stage, then the install as a whole should be considered compromised. If this occurs, run "/opt/hadoop-infinite/scripts/uninstall_cdh5.sh" on all nodes and then restart the install from the beginning.

User interface phase - setup

Select Custom Services in the "Cluster Setup" page (last option):

Select the following services and "Continue":

  • HDFS
  • MapReduce
  • ZooKeeper

The next page lets you control role assignments:

It is recommended to assign the "master" roles (NameNode, SecondaryNameNode, Balancer, HttpFS, JobTracker, all the "Cloudera Management Service" roles) to DB nodes (which have more flexible memory handling), and to balance them out across the available DB nodes as much as possible, to minimize the load on any one machine. (By default all the "master" roles are placed on the same server).

This page also lets you decide which nodes run the TaskTracker and DataNode roles (TaskTracker is needed to run a Map/Reduce job, and DataNode is for the HDFS distributed file system) - eg DB nodes only, or API and DB nodes. We recommend installing on both API and DB nodes - if the API nodes prove to be overloaded, or you are not using Hadoop for heavy duty batch processing, you can always just stop the services on those nodes after installation.

For example in the above screenshot, it would be better to specify "ip-10-60-18-179.ec2.internal" as the SecondaryNameNode, the HttpFS, and 3 of the Management Services. This will balance the processing across the 2 nodes, as shown by this screenshot:

Once you have balanced the role assignments, press "Continue".

On the next page, use the "Embedded Database" (the default), "Test Connection", and then "Continue" once that is complete:

User interface phase - configuration

The next set of pages configure the various services and roles.

First comes a set of directory settings. Each should contain a single directory reading "/mnt" (the Hadoop installer will create symlinks for "/raidarray" if that is present). Sometimes the installer inserts "/dbarray" as a second directory - this is undesirable, so review the page carefully and use the "-" button to remove these suggestions.

When done, "Continue" to the next page, an automated installation page. When that has completed, "Continue" again and "Finish" after accepting the Web page's congratulations. However, you are not quite done yet.

After selecting "Finish" you are taken to the main monitoring/management page. You may see red "Health Issues" - these are worth checking, but they are mainly "low free space" warnings for log directories (the "infinit.e-hadoop-installer" has quite aggressive log rotate/delete schedules, and the root partition is not used for much, so these can be ignored).

Select the MapReduce service from the home page, and then the Configuration tab:

Using the "Search" bar to find them, modify the following configuration settings:

  • Change "Number of Tasks to Run per JVM" to -1
  • Set "MapReduce Service Environment Advanced Configuration Snippet (Safety Valve)" to 
    • JAVA_HOME="/usr/java/default/jre/"
  • Find "MapReduce Child Java Opts Base" and append "-Djava.security.policy=/opt/infinite-home/config/security.policy" after the already-present "-Djava.net.preferIPv4Stack=true" (with a space between them)
  • Search for "Simultaneous" and set (eg) "Maximum Number of Simultaneous Map Tasks" to 2 and "Maximum Number of Simultaneous Reduce Tasks" to 1
    • (on larger instances than the typical 15GB instances, for heavy batch analytics use, this can be increased)
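Concretely, after those edits the two free-text fields should contain the following (the safety-valve line is verbatim from above; the child-opts line is the pre-existing flag with the policy flag appended):

```
MapReduce Service Environment Advanced Configuration Snippet (Safety Valve):
  JAVA_HOME="/usr/java/default/jre/"

MapReduce Child Java Opts Base:
  -Djava.net.preferIPv4Stack=true -Djava.security.policy=/opt/infinite-home/config/security.policy
```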

Then select the "Save Changes" button. This brings up two "Stale Configuration" notifications in the top left:

(the two icons to the left of the orange circle)

Select the second of those "Client configuration redeployment needed", review the changes and select the "Restart" cluster button in the top right:

(and then the "Restart Now" button on the next page - ignore the "Rolling Restart" and leave the "Re-deploy client configuration" checked)

Final steps

Before leaving the UI, go back to the "MapReduce" service, and select (from the "Actions" button in the top right) "Download Client Configuration".

Returning to the command line:

  • Copy the contents of the downloaded zip (the files in its "hadoop-conf" directory) to the "/opt/hadoop-infinite/mapreduce/hadoop" directory of each API node
    • (eg "sh hadoop_config_deploy_el6.sh <CLUSTER-NAME> ~/Downloads/mapreduce-clientconfig.zip <API HOSTS>" for enterprise users) 
  • Upgrade the API nodes to v0.5 (if not already done - restart the "tomcat6-interface-engine" service if v0.5 is already installed)
  • Remove the "/opt/infinite-home/bin/STOP_CUSTOM" file
    • (eg 'sh infinite_run_script_el6.sh <CLUSTER> "rm -f /opt/infinite-home/bin/STOP_CUSTOM" <HOSTS>' for enterprise users)

Finally, the "tomcat" user directory should be added and made world writable (from any of the nodes):

runuser hdfs -c "hadoop fs -mkdir /user/"
runuser hdfs -c "hadoop fs -mkdir /user/tomcat"
runuser hdfs -c "hadoop fs -chmod a+w /user/tomcat"
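The "a+w" in the last command makes the directory world-writable, so jobs running as other users can write under it. The mode string behaves exactly as it does for local chmod; a self-contained illustration on a throwaway local directory (the HDFS commands above are the ones that matter - this is only a demonstration of the mode bits):

```shell
d=$(mktemp -d)
mkdir "$d/tomcat"
chmod a+w "$d/tomcat"            # same mode string as the hadoop fs -chmod above
perms=$(stat -c '%a' "$d/tomcat")
echo "$perms"                    # other-write bit is now set (eg 777 under umask 022)
rm -rf "$d"
```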

Installing CDH5 (YARN)

IKANOW is not yet integrated with CDH5 running YARN. Once it is, this section will explain how to:

  • Install CDH5 running YARN from scratch
  • Move from CDH5 MRv1 to CDH5 YARN.