...

Note that sharding is not fully supported (or at least not fully tested) as of the March 2012 release. Apart from one weekly maintenance script (which is awaiting a new MongoDB feature), we believe it should work. In any case, sharding is not necessary at the current scale of around 3M documents indexed. Assuming sharding is enabled, the top-level design page explains how the system scales.

As an alternative to load balancers, DNS round robin load balancing (using Amazon's Route 53) has also been tested and works well.
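The round-robin alternative amounts to creating one equal-weight Route 53 record per API node under the delegated subdomain. A minimal sketch using the boto3 library (a modern AWS SDK, not part of the original toolchain; the hosted zone ID, record name and hostnames are all placeholders):

    import boto3

    r53 = boto3.client("route53")

    # One equal-weight CNAME per API node under the delegated "rr" subdomain.
    api_nodes = {
        "api-1": "ec2-203-0-113-10.compute-1.amazonaws.com",
        "api-2": "ec2-203-0-113-11.compute-1.amazonaws.com",
    }

    changes = [
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "rr.example.com.",   # placeholder record name
                "Type": "CNAME",
                "TTL": 60,
                "SetIdentifier": name,       # distinguishes the weighted records
                "Weight": 1,                 # equal weights spread requests round-robin style
                "ResourceRecords": [{"Value": host}],
            },
        }
        for name, host in api_nodes.items()
    ]

    r53.change_resource_record_sets(
        HostedZoneId="Z_PLACEHOLDER",        # placeholder hosted zone ID
        ChangeBatch={"Comment": "Round-robin API records", "Changes": changes},
    )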

The remaining sections describe the different steps necessary to get up and running. Note that steps 3-5 can be performed in any order, and it is not necessary to finish one step before starting the next. Also, API nodes can be added to the load balancer before they are complete (they will appear as out of service until the system is working).

...

A single file is used to populate the configuration files for all the custom and standard technologies used in Infinit.e: "infinit.e.configuration.properties". A template for this file can be obtained here.

A full description of the fields within "infinit.e.configuration.properties" is provided here, but the EC2-specific automated configuration makes populating it considerably easier than in the general case. The remainder of this section describes the EC2-specific configuration.
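Once populated, the file needs to be uploaded to S3 so that the nodes created in the following steps can fetch it (the CloudFormation templates reference it via the ConfigFileS3Path parameter, see Step 4). A minimal sketch using the boto3 library (not part of the original toolchain; bucket and key names are placeholders):

    import boto3

    s3 = boto3.client("s3")

    # Upload the populated properties file to the bucket/key that the
    # ConfigFileS3Path CloudFormation parameter will point at.
    s3.upload_file(
        "infinit.e.configuration.properties",
        "my-infinite-bucket",                         # placeholder bucket
        "config/infinit.e.configuration.properties",  # placeholder key
    )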

...

Step 3: Start a load balancer

Info

Amazon Elastic Load Balancers have non-configurable timeouts (eg 60 seconds). This can cause problems with some Infinit.e operations, such as testing and deleting sources and documents.

You can request that Amazon increase the timeout on their EC2 forums, and they will normally do so within a day or two (see this example forum post).

An alternative is to use the load balancer only to provide automated health-checking of the API, and to use Amazon's DNS service, Route 53, for round-robin load balancing (eg delegating the "rr" subdomain of ikanow.com: useful link).

We provide a template for this (here), though the AWS management console interface is just as good; the only custom parameter is the health check target, which should be set to "HTTP:80/api/auth/login/ping/ping".
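If you prefer to script the health check rather than set it in the console, the same target can be applied via the classic ELB API. A minimal sketch using the boto3 library (not part of the original toolchain; the load balancer name is a placeholder):

    import boto3

    elb = boto3.client("elb", region_name="us-east-1")  # classic ELB API

    elb.configure_health_check(
        LoadBalancerName="infinite-api-lb",  # placeholder load balancer name
        HealthCheck={
            "Target": "HTTP:80/api/auth/login/ping/ping",  # health check target from the text above
            "Interval": 30,
            "Timeout": 5,
            "UnhealthyThreshold": 2,
            "HealthyThreshold": 2,
        },
    )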

...

  1. Navigate to the CloudFormation tab in the AWS management console.
  2. Select "Create New Stack"
  3. Either upload the template (if you've modified it) via "Upload a Template file" or specify TODOLINK in "Provide a Template URL".
  4. Select a "Stack Name" and click Next/Finish where prompted.
  5. The Load Balancer URL can be found either from the "Output" tab in CloudFormation or from the EC2 tab, then the navigation bar "NETWORK & SECURITY" > "Load Balancers".
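The same information can also be retrieved programmatically once the stack is up, which is handy when scripting the later steps. A minimal sketch using the boto3 library (not part of the original toolchain; the stack name and region are placeholders):

    import boto3

    cfn = boto3.client("cloudformation", region_name="us-east-1")

    # Print the stack outputs (including the load balancer URL) for a stack
    # created via the console as described above.
    stack = cfn.describe_stacks(StackName="infinite-lb")["Stacks"][0]
    for output in stack.get("Outputs", []):
        print(output["OutputKey"], "=", output["OutputValue"])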

...

The precise steps vary depending on how the config server nodes are deployed: standalone (recommended for operational deployments if sharding is enabled) or run on the same node as the DB (for unsharded/small/dev/test deployments):

  • The standard deployment is to run 1 or 3 (or 5) standalone config servers (generally on very cheap micro instances).
  • For smaller or test deployments, a single config server can be co-located with one of the DB nodes. 

As noted above, it is likely that you will be running unsharded deployments, both because even pretty large clusters (with many API nodes) still perform well with only 2 DB nodes in 1 replica set, and because at release sharding is largely untested operationally!

Step 4 - Scenario 1: DB nodes with 1 co-located config server

...

DB nodes with 1 co-located config server

As for the load balancer, navigate to the "CloudFormation" tab, select "Create New Stack", upload/link to the DB template (single node or replica pair), select a "Stack Name" (for display only) and then "Next" to the configuration parameters.

...

  • ClusterName: the cluster name, should match the "infinit.e.configuration.properties" file.
  • IsConfigSvr: should be set to "1" for the first node created, "0" after that (for combined config server/DB scenarios only). 
    • Note that once one config server has been started like this, adding extra config servers will stop new DB nodes from starting successfully.
  • ReplicaSetIds: For unsharded deployments (as set in "infinit.e.configuration.properties"; almost certainly what you will be running), just leave as "1" all the time. For sharded deployments, use "1" for the first 2 nodes, "2" for the second 2 nodes, etc.
    • It is also possible to make a node join multiple replica sets by setting a comma-separated list, eg "1,2,3" to belong to 3 replica sets (one DB process is created per replica set). This is not recommended for typical usage, but could be useful eg to use a single node for multiple "slaves" (the low performance won't matter because they'll never be queried in practice).
  • NodeName: The name displayed in the EC2 instances. For the replica pair template, the actual names are "<NodeName>-1" and "<NodeName>-2".
  • ConfigFileS3Path: the location of the "infinit.e.configuration.properties" file in your S3 storage.
  • AwsAccessId: The AWS ID/Access Key for your Amazon account.
  • AwsAccessKey: The AWS Key/Secret Key for your Amazon account.
  • AvailabilityZone: Must be consistent with the availability zone from which the stack was launched (top left of CloudFormation tab)
  • SecurityGroups: Set the security group from Step 1.
  • KeyName: Set the key from Step 1.

...

  • InstanceType: Defaults to "m1.xlarge", which is what you want for any decent sized deployment; use "m1.large" for test/demo clusters. Note that if "m1.xlarge" then RAID is automatically installed on startup (which takes about 10 minutes).
  • IsStorageNode: (leave as 1).
  • QuickInstall: Defaults to "–fast", which saves 15 minutes on node start-up but will not update the system packages from whatever AMI is in use. Set to "–slow" instead for a more up to date OS.

Note that in practice you will probably want to override the default templates, so that standard fields like ClusterName (unless you have multiple clusters in the same AWS account), ConfigFileS3Path, AwsAccessId, AwsAccessKey, AvailabilityZone, SecurityGroups and KeyName (ie basically everything!) are set to default parameters and can normally be ignored.

Note also that while CloudFormation stacks were designed to create entire stacks (eg load balancer, API nodes, replica sets), we only use them for individual elements (eg one for the load balancer, one for API nodes, one for DB nodes). This is because the CloudFormation templates do not allow addition (or, less importantly, removal) of nodes except via the unsuitable AWS Auto Scaling function.
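For completeness, here is how the same DB-node stack could be created from a script rather than the console, passing the parameters described above. A minimal sketch using the boto3 library (not part of the original toolchain; all values and the template URL are placeholders):

    import boto3

    cfn = boto3.client("cloudformation", region_name="us-east-1")

    # Parameter names mirror the template fields described above; values are placeholders.
    params = {
        "ClusterName": "my-infinite-cluster",
        "IsConfigSvr": "1",          # "1" for the first DB node only (Scenario 1)
        "ReplicaSetIds": "1",        # leave as "1" for unsharded deployments
        "NodeName": "db-node",
        "ConfigFileS3Path": "s3://my-infinite-bucket/config/infinit.e.configuration.properties",
        "AwsAccessId": "AKIAEXAMPLE",
        "AwsAccessKey": "EXAMPLEKEY",
        "AvailabilityZone": "us-east-1a",
        "SecurityGroups": "infinite-db",
        "KeyName": "infinite-key",
    }

    cfn.create_stack(
        StackName="infinite-db-node-1",
        TemplateURL="https://s3.amazonaws.com/my-infinite-bucket/db-node-template.json",  # placeholder
        Parameters=[{"ParameterKey": k, "ParameterValue": v} for k, v in params.items()],
    )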

Step 4 - Scenario 2: Standalone config servers

First start the 1/3/5 config servers. This will require the same steps as above except:

...

There are specific templates for single and three-node configurations (the 5-node case is an easy tweak to the existing template, if needed). The config server parameters are the same as for the DB nodes but without the unnecessary ReplicaSetIds, IsConfigSvr and IsStorageNode.

The config server Cloudformation template also creates a DNS entry in Route53 for a user-specified Hosted Zone. This is necessary because of a bug in MongoDB where changing the hostname of a config server (eg because the EC2 instance becomes unstable so a new node must be created) requires a complete cluster restart (in order: shutdown API nodes, DB nodes, config nodes; startup config nodes, DB nodes, API nodes). The DNS entry is written into the EC2 metadata in the "DnsName" field.

The only other difference is that InstanceType is one of "t1.micro" or "m1.large".

...

(Alternatively, use the "DB Config Server" template provided.) The micro instance should be fine in most cases (and is >10x cheaper).

Then start the main DB nodes, again just as in Scenario 1 above, except:

  • IsConfigSvr should always be "0", otherwise system-wide problems will occur.
  • DnsName should be present, unique, and point via CNAME to the actual hostname, otherwise system-wide issues may occur.
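The DnsName entry itself is an ordinary Route 53 record; the stable alias means a replacement config server instance can be brought in by updating the record, rather than changing the hostname that MongoDB sees. A minimal sketch of that update using the boto3 library (not part of the original toolchain; the hosted zone ID, record name and hostname are placeholders):

    import boto3

    r53 = boto3.client("route53")

    # Point the stable config-server alias (the DnsName value) at the current
    # instance's public hostname.
    r53.change_resource_record_sets(
        HostedZoneId="Z_PLACEHOLDER",             # placeholder hosted zone ID
        ChangeBatch={
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "config1.example.com.",  # placeholder DnsName value
                    "Type": "CNAME",
                    "TTL": 60,
                    "ResourceRecords": [
                        {"Value": "ec2-203-0-113-20.compute-1.amazonaws.com"}  # placeholder hostname
                    ],
                },
            }]
        },
    )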

Step 5: Start API nodes

The API nodes can then be started. It is difficult to determine the number of nodes needed in advance because it depends heavily on usage patterns and the sort of documents being indexed. It is therefore recommended to start with 2 and add new ones if response times are too long.

...

  • InstanceType: Defaults to "m1.xlarge", which is what you want for any decent sized deployment; use "m1.large" for test/demo clusters. Note that if "m1.xlarge" then RAID is automatically installed on startup (which takes about 10 minutes).
  • QuickInstall: Defaults to "–fast", which saves 15 minutes on node start-up but will not update the system packages from whatever AMI is in use. Set to "–slow" instead for a more up to date OS.

As with the DB nodes, in practice you will probably want to override the default templates, so that standard fields like ClusterName (unless you have multiple clusters in the same AWS account), ConfigFileS3Path, AwsAccessId, AwsAccessKey, AvailabilityZone, SecurityGroups and KeyName (ie basically everything!) are set to default parameters and can normally be ignored.

The same comments as for the DB nodes about using CloudFormation somewhat sub-optimally also hold. This is particularly noticeable for API nodes because it results in one final step, discussed in the next section.

Step 6: Connect the API nodes to the load balancer
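If you created a classic Elastic Load Balancer in Step 3, this registration step can also be scripted rather than done through the console. A minimal sketch using the boto3 library (not part of the original toolchain; the load balancer name and instance IDs are placeholders):

    import boto3

    elb = boto3.client("elb", region_name="us-east-1")  # classic ELB API

    # Register the newly started API node instances with the load balancer;
    # they will show as out of service until their health checks pass.
    elb.register_instances_with_load_balancer(
        LoadBalancerName="infinite-api-lb",  # placeholder load balancer name
        Instances=[
            {"InstanceId": "i-0123456789abcdef0"},  # placeholder instance IDs
            {"InstanceId": "i-0fedcba9876543210"},
        ],
    )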

...

You now have a fully operational Infinit.e cluster. Start adding sources and you can begin analysis. This link provides a quick example of getting a source imported in order to test/demonstrate the GUI.

It takes about 20 minutes for a node to come online following start-up. Most of this time (10-15 minutes) is spent updating the packages (like Java and JPackage) from the defaults on the CentOS 5.5 AMI. Therefore the time-to-start could be significantly improved by building a new custom AMI: start from the base AMI, install the infinit.e-prerequisites-online RPM, and then create the new AMI (we used that link to generate this script file on GitHub); a sketch of the final image-creation call is shown below.

Note that while CloudFormation stacks are primarily intended to start entire clusters, this is not practical for Infinit.e because the only way of adding or subtracting nodes is with Amazon Auto Scaling (ie not manually, except by treating each node as a separate stack, as we do), and the available node addition/removal criteria do not map well onto how Infinit.e resource management works. Therefore each CloudFormation stack is normally a single node (apart from the 3-node config server and 2-node replica set "convenience" templates).
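Following on from the custom-AMI suggestion above, the final image-creation step is a single EC2 call against the prepared instance. A minimal sketch using the boto3 library (not part of the original toolchain; the instance ID and image name are placeholders):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Snapshot the prepared instance (base AMI plus pre-installed prerequisite RPMs)
    # as a reusable AMI for faster node start-up.
    response = ec2.create_image(
        InstanceId="i-0123456789abcdef0",  # placeholder: the prepared instance
        Name="infinite-node-base",         # placeholder image name
        Description="Base AMI with infinit.e prerequisites pre-installed",
    )
    print(response["ImageId"])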

For "quick installs", it takes about 15 minutes for "m1.xlarge" API nodes to start-up (10 minutes of this is the RAID setup - so "m1.large" take about 5 minues). DB nodes take 5-10 minutes longer (MongoDB initialization time).

If startup time is important then the base AMI provided can be extended manually (eg installing the RPMs by hand) and then saved as a new AMI. If RAID is required together with quick node startups then EBS nodes will need to be used in place of the ephemeral storage, and similarly for pre-initialization of the DB.