
Overview

Version 0.5 of IKANOW requires that CDH5 be installed. This enables applications such as Storm and Spark to run on the YARN framework.

Unfortunately, CDH5 and CDH3 are not backwards compatible with one another (*), so CDH5 must be installed before IKANOW v0.5. This is enforced by requiring that hadoop-installer-v0.5 be present before infinit.e-processing-engine-v0.5 (and hence also infinit.e-interface-engine-v0.5) can be updated.

(*) Except that the MapReduce JARs themselves do not need to be recompiled.

This page describes the steps necessary to do this. If installing CDH5 on a new IKANOW install, the "Upgrading from CDH3" section can be skipped.

This documentation describes how to get set up with MRv1 running under CDH5.

The IKANOW platform is not currently compatible with YARN, but moving between MRv1 and YARN requires no significant re-installation: just shut down MRv1 and start YARN in the Cloudera Manager.

Upgrading from CDH3

It is not possible to upgrade from CDH3 in place. Instead, CDH3 must be removed and then CDH5 installed.

All jobs must be stopped before this occurs: create the empty file "/opt/infinite-home/bin/STOP_CUSTOM" and wait for all jobs to complete in the jobtracker ("ROOT:8090/monitoring/<jobtracker-hostname>/50030/jobtracker.jsp") before continuing.

  • (eg 'sh infinite_run_script_el6.sh <CLUSTER> "touch /opt/infinite-home/bin/STOP_CUSTOM" <HOSTS>' for enterprise users)
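The wait for running jobs can be scripted rather than watched by hand. A hedged sketch, assuming curl is available on the node and that the grep pattern below matches how running jobs actually appear on your jobtracker.jsp page (it is an assumption - adapt it to the real page markup):

```shell
# Hedged sketch: poll the JobTracker page until no jobs appear to be running.
# The URL and the 'job_' match pattern are assumptions - job IDs on the
# jobtracker.jsp page typically look like job_201406..., but verify against
# your own cluster's page before relying on this.
wait_for_jobs() {
  local url="$1"
  while curl -s "$url" | grep -q 'job_'; do
    echo "Jobs still running; sleeping 30s"
    sleep 30
  done
  echo "No running jobs detected"
}

# Typical use:
#   wait_for_jobs "http://<jobtracker-hostname>:50030/jobtracker.jsp"
```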

All files in HDFS that you don't want to lose must be copied off (eg to S3) and then copied back again when the upgrade is complete.
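One way to copy data off is hadoop distcp. A sketch only - the bucket name and paths below are placeholders, and the s3n:// scheme assumes S3 credentials are already configured in the Hadoop configuration:

```shell
# Back up an HDFS directory to S3 before the upgrade; YOUR-BUCKET and the
# paths are placeholders - substitute your own.
hadoop distcp hdfs:///user/infinite/data s3n://YOUR-BUCKET/backup/data

# Restore it once CDH5 is up and running:
hadoop distcp s3n://YOUR-BUCKET/backup/data hdfs:///user/infinite/data
```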

You should check whether you have any scripts that import data into HDFS and turn them off temporarily.
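One way to locate such scripts is to scan the crontab for HDFS-related entries. A sketch - the search terms are assumptions, so extend them to match how your own import scripts are named:

```shell
# Print (with line numbers) any crontab entries that look HDFS-related;
# the search terms are assumptions - adjust them to match your scripts.
find_hdfs_jobs() {
  grep -i -n -e 'hdfs' -e 'hadoop' -e 'distcp' "$1"
}

# Typical use: dump the current user's crontab to a file first, eg
#   crontab -l > /tmp/cron.txt && find_hdfs_jobs /tmp/cron.txt
```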

Then go to the Cloudera Manager and shut down all services. (It is not necessary to remove hosts, because the cluster database will be destroyed as part of these steps.)

Once the above steps have been taken to stop Hadoop on the cluster, install/upgrade the latest "infinit.e-hadoop-installer" RPM (must be v0.5 or higher - note this also means the machines must have JDK8 installed).

  • (eg "sh infinite_deploy_rpm_el6.sh <CLUSTER> ./infinit.e-hadoop-installer.online-v0.5-* <HOSTS>" for enterprise users)

Next run the script "/opt/hadoop-infinite/scripts/uninstall_cdh3.sh" on each node (API and DB - the script will fail harmlessly if the node does not currently run CDH3)

  • (eg 'sh infinite_run_script_el6.sh <CLUSTER> "sh /opt/hadoop-infinite/scripts/uninstall_cdh3.sh" <HOSTS>' for enterprise users)

This completes the uninstall of CDH3.

Installing CDH5 (MRv1)

Command line phase

If you did not already do as a step under "Upgrading from CDH3", install/upgrade the latest "infinit.e-hadoop-installer" RPM (must be v0.5 or higher).

On each node in the cluster (API and DB nodes - regardless of whether Hadoop/HDFS will actually be running), run the "/opt/hadoop-infinite/scripts/online_install.sh" script.

  • (eg 'sh infinite_run_script_el6.sh <CLUSTER> "sh /opt/hadoop-infinite/scripts/online_install.sh" <HOSTS>' for enterprise users)

Finally for the "command line phase", select a node on which to run the Cloudera Manager server, and run "sh /opt/hadoop-infinite/scripts/online_install.sh full" on that server in an interactive console. Select "<Next>/<Yes>/<OK>" whenever prompted - this console UI has no options.

Once finished, the install server will have started a web server on port 7180, so tunnel that port to a local port (eg 7180!) using ssh and visit that "localhost" page in your local browser (or access the page directly if you are directly connected to the node).
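The tunnel can be created with standard OpenSSH port forwarding. A sketch, where the user and hostname are placeholders for your own SSH credentials:

```shell
# Forward local port 7180 to port 7180 on the Cloudera Manager node;
# <user> and <cm-host> are placeholders.
ssh -L 7180:localhost:7180 <user>@<cm-host>

# Then browse to http://localhost:7180 on the local machine.
```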

User interface phase - installation

Log in using admin/admin

In the first page select "Cloudera Express" and "Continue" twice (until you get to the "Specify hosts for your CDH cluster installation")

Add the hostnames for the nodes you want to add to the cluster, "Search" and "Continue" (assuming the right hostnames appeared)

On the "Cluster Installation" page, accept the defaults and "Continue". 

On the page after that, select the "Install Oracle Java SE Development Kit" option and "Continue".

On the page after that, ignore the "Single User Mode" option and "Continue".

The next page requires you to select the SSH login credentials. For most systems, this involves uploading an ssh key. Don't forget to set the key passphrase if one is specified. Only one simultaneous installation needs to be specified.

"Continue" on, which will start the installation. Once that is done "Continue" again, to move to another automatic installation page ("Installing Selected Parcels"). "Continue" once that is done.

The next page is the "Host inspector" - this page will provide warnings and errors. The following warnings can be ignored:

  • "Cloudera recommends setting /proc/sys/vm/swappiness to 0"
  • "There are mismatched versions across the system, which will cause failures. See below for details on which hosts are running what versions of components"
    • (this just refers to Java)
  • "Cloudera supports versions 1.6.0_31 and 1.7.0_55 of Oracle's JVM and later. OpenJDK is not supported, and gcj is known to not work. Check the component version table below to identify hosts with unsupported versions of Java."

Other reported issues may cause problems and should be investigated before continuing.

If there are no unexpected issues, then hit "Finish". This will take you to the configuration phase described below.

In my experience, if you have to go "Back" at any point, or refresh the browser at any stage, then the install as a whole should be considered compromised. If this occurs, run "/opt/hadoop-infinite/scripts/uninstall_cdh5.sh" on all nodes and then restart the install from the beginning.

User interface phase - setup

Select Custom Services in the "Cluster Setup" page (last option):

Select the following services and "Continue":

  • HDFS
  • MapReduce
  • ZooKeeper

The next page lets you control role assignments:

It is recommended to assign the "master" roles (NameNode, SecondaryNameNode, Balancer, HttpFS, JobTracker, and all the "Cloudera Management Service" roles) to DB nodes, which have more flexible memory handling, and to balance them across the available DB nodes as much as possible to minimize the load on any one machine. (By default, all the "master" roles are placed on the same server.)

This page also lets you decide on which nodes to run the TaskTracker and DataNode roles (TaskTracker is needed to run a Map/Reduce job; DataNode provides the HDFS distributed file system) - eg DB nodes only, or API and DB nodes. We recommend installing on both API and DB nodes: if the API nodes prove to be overloaded, or you are not using Hadoop for heavy-duty batch processing, you can always stop the services on those nodes after installation.

For example in the above screenshot, it would be better to specify "ip-10-60-18-179.ec2.internal" as the SecondaryNameNode, the HttpFS, and 3 of the Management Services. This will balance the processing across the 2 nodes, as shown by this screenshot:

Once you have balanced the role assignments, press "Continue".

On the next page, use the "Embedded Database" (the default), "Test Connection", and then "Continue" once that is complete:

User interface phase - configuration

The next set of pages configure the various services and roles.

TODO

Installing CDH5 (YARN)

IKANOW is not yet integrated with CDH5 running YARN. Once it is, this section will explain how to:

  • Install CDH5 running YARN from scratch
  • Move from CDH5 MRv1 to CDH5 YARN.