UNDER CONSTRUCTION

Suspending and deleting sources

Video: https://www.youtube.com/watch?v=XnUMaBnH_8w

As outlined in the video above, once Datasift sources have been created (as detailed here, in video and documentation), they can be controlled from the source editor.

The following operations can be performed:

...

To suspend an active source, simply navigate to the source editor tab of the manager, select the desired source from the list on the left, and select the "Disable Source" button (clicking "OK" to confirm publishing the source when prompted).

...

Suspended sources can be re-activated as described below.

...

Re-activate suspended sources

Re-activating a suspended source is the same operation: the source will have an "Enable Source" button in the same place, and clicking on it and confirming will perform the following:

...

To delete a source follow the steps from the source editor documentation. Deleting a source will automatically delete the corresponding Datasift push subscription.

...

If it is necessary to make minor modifications to the JCSDL then this can be done without deleting and re-creating the source. Instead: suspend the source; wait two minutes (to be sure the existing Datasift push subscription has been terminated); modify the JCSDL in the subscription (this is one of the fields in the source editor); and then activate it (which will automatically publish the change).

Info

If there is enough demand then we will add a graphical editor to the Datasift connector widget, so that sources can be modified more formally.

Further reading:

Creating aliases and discarding unwanted entities

Video: https://www.youtube.com/watch?v=op0OUlrKojQ

...

Info

One downside to this is that there is no way within the widget to transfer aliases from the sandbox to a real community (eg once you are happy with them). However the sub-section "Manually setting alias configurations" explains how this can be achieved easily using the File Uploader page instead.

Creating synthetic alias masters

...

As with all Infinit.e GUI functionality, the Entity Alias Builder widget is just an interface to our open API.

Aliases are stored in Infinit.e as JSON shares of type "infinite-entity-alias". Their format is described here. They can be manually uploaded and shared between communities using the File Uploader manager page.

This can be useful for two purposes:

  • Where there are large numbers of aliases to be generated, it would not be much fun to use the GUI for each one. Instead you can programmatically generate (eg with a script) a JSON file containing the aliases and then upload it.
  • This is a bit beyond the scope of this documentation, but you can also create a plugin (eg using the Javascript scripting engine) and then create a share with type "infinite-entity-alias" that points to the custom plugin results (this is described in the File Uploader documentation). 
    • So as an example, if you have a Word document that lists lots of social media handle mappings, then you could upload that as a share, then import that share as a source (this is discussed further below under "Importing other sources"), then write a Javascript plugin (see below under "More complex analytics") that parses the document into the right format, and then finally point a share to that! 
      • This would have the nice feature that it would automatically update itself whenever the document was re-uploaded.
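For the programmatic-generation case in the first bullet, a short script can emit the alias JSON. The field names below ("masterAliases" etc.) are illustrative assumptions only; see the linked format description for the real schema of "infinite-entity-alias" shares:

```javascript
// Hypothetical sketch: generate an "infinite-entity-alias" JSON document
// from a simple mapping of master entities to their aliases.
// NOTE: the field names here are assumptions for illustration;
// consult the alias format documentation for the actual schema.
function buildAliasShare(mapping) {
  var masters = {};
  Object.keys(mapping).forEach(function (master) {
    masters[master] = { aliases: mapping[master] };
  });
  return JSON.stringify({ masterAliases: masters }, null, 2);
}

// Example: map two Twitter handles onto one master entity
var json = buildAliasShare({
  "John Smith/person": ["@jsmith/twitterhandle", "@john_smith/twitterhandle"]
});
```

In practice the mapping object would be generated by a script (eg from a spreadsheet), and the resulting file uploaded via the File Uploader.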

...

Info

A few things to note when using aliases in multi-user environments:

  • Anyone can create a share that defines an alias, but aliases are only applied when "endorsed". For security purposes, only a user with role "content publisher" or above can endorse shares (see the Community Manager documentation for how to change users' roles; this is also discussed below under "Adding Communities"). You can endorse via the API, but in general the best way of re-endorsing a share for a community is to unshare it for that community, submit it, and then reshare it (as an administrator or moderator) using the File Uploader.
  • The widget does not support multi-user environments that well - once one user has created an alias share for a community, then only that user or a moderator/administrator can modify it. 
    • The idea is that one user per community should be responsible for the aliases for that community to avoid confusion. We anticipate improving the level of support in the future, as we get more feedback from our operational deployments.

...

More complex analytics and visualization

VIDEO COMING SOON!

UNDER CONSTRUCTION (Note: the functionality is already present in the AMI; we just need to write the video and associated documentation.)

Further reading:

Exporting the data (and alerting, and backups)

Using the API

An important feature of the Infinit.e platform is that it wants data to be open: our user interfaces and applications use our open RESTful API, so any other client can get the same data.

The primary method of getting at the data is via the query API call, and that linked page shows some examples of making the call in JavaScript and ActionScript. In addition we have a beta (ie undocumented!) Java driver here (which we use internally, so it is well supported). There are more general examples of using the API in different languages here.

In the context of using the query API to support bulk export of the data, this section of our knowledgebase describes how to use the "curl" command line utility (in Linux, or MacOS, or cygwin on Windows platforms) to script getting all the data out.
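The same export loop can be sketched in JavaScript instead of curl. The endpoint path and the paging parameter names below are assumptions for illustration; consult the query API documentation for the actual request format:

```javascript
// Sketch of paging through the query API for a bulk export.
// The "/knowledge/query" path and the skip/numReturn parameter names
// are assumptions - see the query API documentation for the real format.
function buildQueryRequest(apiRoot, queryTerms, page, pageSize) {
  return {
    url: apiRoot.replace(/\/$/, "") + "/knowledge/query",   // assumed path
    body: {
      qt: queryTerms,                                       // assumed query-terms field
      output: { docs: { skip: page * pageSize, numReturn: pageSize } }
    }
  };
}

// Page 3 of a 100-docs-per-page export:
var req = buildQueryRequest("http://localhost/api/", [{ etext: "*" }], 3, 100);
// An HTTP client (curl, node's http module, etc.) would then POST
// req.body to req.url, looping until fewer than pageSize docs return.
```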

MongoDB dumps and backups

The underlying data store for Infinit.e is the popular NoSQL database called MongoDB. If you have ssh access to the server then you can use mongodump or mongoexport to get at the data. This image describes the database format.

It is worth noting while discussing Mongo that a nightly backup of the data is generated (at 1am) and stored at "/opt/db-home/" as "db_backup_<<hostname>>_most_recent.tgz". Currently nothing is done with this file (ie it is overwritten nightly). It is recommended that you upload this to S3 regularly (it was not possible to pre-configure this because of AWS restrictions). More details on the backup process are provided here.

GUI utilities

The main GUI provides three ways of saving the data or workspace state (see screenshot below):

  • "Copy workspace link to clipboard": This copies a (long!) URL to the clipboard that will return you to the current query, community set, and widget set when pasted into a browser. 
    • Note this URL is too long for some applications to handle (eg gmail unfortunately) - a forthcoming release will use a link shortener.
  • "Create PDF for current data view": This will open a new tab containing a PDF that contains screenshots of all the open widgets together with information about the query that was used.
    • (Widgets can be programmed to write more detailed information into the PDF, though currently only the Doc Browser widget takes advantage of this.)
    • As an alternative the second section of this blog post describes generating per-widget screenshots. This has been very popular for creating "quickview" presentations.
  • "Export JSON for current data": This saves a file to local disk containing the JSON returned from the query. The format is described here.

(screenshot)

One final export mechanism is another optional part of the widget API, and is currently supported by the following widgets:

  • Event graph: exports the graph to GraphML (all edges/nodes except those filtered out)
  • Map: export to KML

The screenshot of a widget header below shows the icon that appears when this per-widget export is enabled:

(screenshot)

It is also possible to load and save queries (JSON) to disk, from the "Advanced Query Builder".

Alerting using RSS

The final option in the "Options" screenshot above ("Create RSS feed for current query") has been very popular with our users. Selecting that option opens a new tab in the browser containing a (long!) URL that generates an RSS feed for the current query. This feed can be used in RSS readers or alerting tools supporting RSS (an access key is embedded in the URL so no authentication is required on the RSS reader side).

Further reading:

Importing other sources

Although the focus of this Amazon AWS Marketplace product is to allow users to ingest social media easily from Datasift, the entire Infinit.e community platform is included.

Infinit.e is a general purpose tool for harvesting, enriching, and analyzing data of many different types from many different sources, including filesystems and enterprise Intranets, databases, and the Web. 

This section provides a brief description of the more general harvesting functionality, and mainly a list of resources for users who want to explore these additional capabilities. Note that Infinit.e provides a rich and complex framework (though with simple shortcuts and templates where possible), and it is beyond the scope of this web page to document it fully.

Overview of harvesting in Infinit.e

Harvesting in Infinit.e is controlled by JSON documents called sources. These sources can be tested by POSTing to the Config - Source - Test REST endpoint, and activated/updated ("published") by POSTing to the Config - Source - Save REST endpoint.

In practice the Source Manager GUI can be used to perform these activities in a more visual, intuitive way. It still requires building the source JSON with limited development support - as can be seen from the documentation here, this requires some JavaScript skills and some effort. The source manager provides some templates to get up and running on simpler types of ingest, and there is a source gallery with some real world examples of various complexities.

(In addition, our enterprise offering provides a visual "ETL" tool)

Quickly importing sources using the Chrome extension

For pulling public RSS feeds and HTML pages we provide a Chrome extension that gives a "1-click" import capability. This is described here.

Enrichment and entity extraction

One augmentation feature that is provided by Datasift, and therefore not applied to data imported via other sources, is the entity extraction provided by Salience. The Infinit.e platform provides the following enrichment engine alternatives:

  • TextRank: Extracts keywords similarly to Salience/Datasift (though less well)
  • (connector) AlchemyAPI: You can register for an API key with AlchemyAPI and use their service, which is integrated into Infinit.e. AlchemyAPI has a free tier allowing 1,000 transactions/day. This connector pulls named entities only by default, but does include sentiment.
  • (connector) AlchemyAPI-metadata: This is another connector to AlchemyAPI, which provides keywords but no entity extraction - it is best used for short/badly formatted sources like Twitter.
  • (connector) OpenCalais: OpenCalais is an alternative to AlchemyAPI - it focuses on business and politics, and doesn't have sentiment but does provide "business associations" (takeover rumors, that sort of thing). It has a significant free tier, offering 50,000 transactions per day once you register for an API key.

Note that these entity extractors all have different ontologies: their types are slightly different, and so are their "disambiguation formats" ("State" vs "StateOrProvince"; "Paris, Texas" vs "Paris, Texas. USA"). This is not ideal for combining with the built-in Salience augmentation, since the same entity will appear in different forms. The entity aliasing function can be used to clear up some of these issues (eg for important entities; alternatively a custom job could generate aliases automatically from the extracted data using some simple heuristics).
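As a concrete illustration of such a heuristic, the sketch below maps extractor-specific types onto a canonical form before comparing entities. The type table and the { name, type } entity representation are assumptions for illustration, not the platform's actual formats:

```javascript
// A simple heuristic for reconciling extractor ontologies: map each
// extractor-specific entity type onto a canonical one before comparing
// or aliasing entities. The table below is an illustrative assumption,
// not a complete mapping.
var TYPE_MAP = {
  "StateOrProvince": "State",   // eg AlchemyAPI-style -> Salience-style
  "ProvinceOrState": "State"
};

function canonicalType(type) {
  return TYPE_MAP[type] || type;
}

function sameEntity(a, b) {
  // entities assumed to be { name, type } objects
  return a.name.toLowerCase() === b.name.toLowerCase() &&
         canonicalType(a.type) === canonicalType(b.type);
}
```

A custom job could run a comparison like this over extracted entities and emit alias shares for the matches it finds.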

(Note that Salience does have a SaaS version, called Semantria, which offers a one-time free allowance of 10,000 transactions. We have not built a connector to Semantria (or used it in any way), though it would be easy enough for us or another developer to do so.)

(Note also that our enterprise offering provides the same Salience NLP engine that Datasift uses, which would enable external sources to be integrated seamlessly with Datasift's social media)

Overview

In previous sections we have seen how the query function returns a subset of the matching documents, together with some basic averaging statistics, and how this is sufficient for many standard data driven investigations.

In other cases, particularly as your activities move from data investigation to data science, it becomes necessary either to apply more complex algorithms (for example graph theory or social network analysis), or to calculate standard statistics in domain-specific ways (a very simple example of this would be aggregating sentiment geographically).

(screenshot)

In order to support these sorts of operations, Infinit.e provides the ability to plug-in analytic modules that can run over any subset of the data (including all of it). The general topic of building plug-in modules and scheduling and running them is beyond the scope of this documentation; this section will provide links to the Infinit.e documentation and describe the aspects most relevant to Datasift. 

In particular, we have provided three sample jobs that illustrate a few different types of analytic and demonstrate how to access the document objects and interpret the results (in practice the Infinit.e-specific parts like this are very easy; the difficulty is typically in building the algorithms themselves, as it should be).

Info

Note that the examples below assume that you have an instance into which you have loaded some sources and collected some data, using the techniques described in previous sections.

Example 1 - Aggregate sentiment by geo

This sample analytic can be seen in the screenshot above. It simply creates a 10 degree x 10 degree grid and aggregates the sentiment associated via geo-tagged entities with those grid squares.
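The grid logic can be sketched as follows. This is a minimal reconstruction, assuming input records of the form { lat, lon, sentiment }; the sample job's real record format may differ:

```javascript
// Minimal sketch of the grid aggregation: bucket each geo-tagged
// sentiment value into a 10x10 degree cell and sum/count per cell.
function aggregateByGrid(records) {
  var cells = {};
  records.forEach(function (r) {
    // snap to the south-west corner of the enclosing 10-degree cell
    var key = (Math.floor(r.lat / 10) * 10) + "," + (Math.floor(r.lon / 10) * 10);
    if (!cells[key]) cells[key] = { totalSentiment: 0, count: 0 };
    cells[key].totalSentiment += r.sentiment;
    cells[key].count += 1;
  });
  return cells;
}

// Two points near the US east coast fall into the same "30,-80" cell:
var grid = aggregateByGrid([
  { lat: 38.9, lon: -77.0, sentiment: 0.5 },
  { lat: 32.7, lon: -79.9, sentiment: -0.2 }
]);
```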

The sequence of the screenshots below shows how to access the example (starting from the manager webapp, eg press the "MANAGER" link in the top right of the main GUI):

(three screenshots)

The key screenshot above is the middle one, which shows the "scary-at-first-glance" plugin manager. This manager is documented in more detail here; for now we'll focus on the following components:

  • The dropdown menu at the top lists available tasks. Selecting one fills in the rest of the form, as shown.
  • The "QuickRun" or "Save and Debug" buttons save the current settings and run the job (on a subset of the records in the latter case). You can see from the status message that (on 8.5K records), the job took ~19s to complete and generated 44 aggregated records.
    • (Neither of these options reloads the page until the job has completed - to run the job asynchronously you can use "Submit" instead; this is discussed below.)
  • Once the job has run you can view the results in a separate tab by pressing the "Show results" button. 
    • Note that this new tab just uses the Custom - Get Results RESTful API call directly, and unless your browser is configured to render JSON it will not be nicely formatted - we use the Chrome/Firefox extension JSONView.

Regardless of how nicely formatted the JSON is, in practice it is preferable to have a graphical view of the resulting data. The Infinit.e application comes with two widgets for this purpose:

  • Custom Viewer - Map: Finds fields in the record that "look like" lat/long points and plots them on a map (MapQuest) colored according to a score defined by a numeric field in the same record (see below for more details)
    • The rules for "looks like" lat/long are as follows:
      • Is a top-level object called "geo", "geotag", "latlong", or "latlon" AND consists of two numeric fields (or strings representing numbers) with names "lat" and "lon" or "latitude" and "longitude"
      • Has two top-level numeric fields (or strings representing numbers) with names "lat" and "lon" or "latitude" and "longitude"
  • Custom Viewer - Bar Graph: Uses any field from the record as a key, and plots a bar of height defined by a numeric field in the same record (see example 2 for more details).
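The detection rules above can be expressed as a predicate. This is a simplified reconstruction of the described behavior, not the widget's actual code:

```javascript
// Simplified reconstruction of the "looks like lat/long" rules above.
var GEO_NAMES = ["geo", "geotag", "latlong", "latlon"];

function isNumericish(v) {
  // numeric, or a string representing a number
  return typeof v === "number" ||
         (typeof v === "string" && !isNaN(parseFloat(v)));
}

function hasLatLonPair(obj) {
  return (isNumericish(obj.lat) && isNumericish(obj.lon)) ||
         (isNumericish(obj.latitude) && isNumericish(obj.longitude));
}

function looksLikeGeo(record) {
  // Rule 1: a top-level object with one of the known geo names
  var named = GEO_NAMES.some(function (n) {
    return !!record[n] && typeof record[n] === "object" && hasLatLonPair(record[n]);
  });
  // Rule 2: lat/lon as top-level fields on the record itself
  return named || hasLatLonPair(record);
}
```

Records that fail both rules will not be plottable by the map widget, which is why the first dropdown selection can fail (see "Visualizing the output of plug-ins" below).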

In addition we have successfully used the free jsfiddle service to visualize analytics - see this blog post for more details.

For this example, the "Custom Viewer - Map" is the obvious choice. The screenshot below shows the different options from the header.

(screenshot)

  • The first dropdown menu selects the plug-in from which to take the results.
    • (The selection will fail if the widget cannot detect any fields that look like lat/long. This is discussed below, under "Visualizing the output of plug-ins")
  • The second menu allows the user to select which field determines the color of the plotted points (from the palette of green/blue/orange/red). 
    • (This job has generated two numeric fields, the aggregated sentiment, and the number of records containing sentiment)
  • The third menu determines how the score field is converted into a color:
    • Linear scale: the lowest score is green, the highest score is red, the buckets are distributed evenly from min to max.
    • Log scale: the lowest score is green, the highest score is red, the buckets are distributed logarithmically from min to max.
    • Polarity: Red is negative, Green is positive, Blue is neutral (less than 10% of the max in either direction).
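The three modes can be sketched as a score-to-color function. The bucket boundaries below are assumptions; the widget's actual binning may differ:

```javascript
// Sketch of the three scoring modes described above, reduced to picking
// one of the four palette colors. Exact bucket boundaries are assumed.
var PALETTE = ["green", "blue", "orange", "red"];

function colorFor(score, min, max, mode) {
  if (mode === "polarity") {
    // neutral band: within 10% of the max in either direction
    if (Math.abs(score) < 0.1 * Math.abs(max)) return "blue";
    return score < 0 ? "red" : "green";
  }
  var t;
  if (mode === "log") {
    // shift so the range starts at zero, then scale logarithmically
    t = Math.log(1 + score - min) / Math.log(1 + max - min);
  } else { // linear
    t = (score - min) / (max - min);
  }
  var idx = Math.min(PALETTE.length - 1, Math.floor(t * PALETTE.length));
  return PALETTE[idx];
}
```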

Returning to the plugin manager, there were two larger text fields:

  • "Query" field: together with the "Communities" list, this controls what data is processed
  • "User arguments" field: in this case this is actually the code that is run over the data. This is because it is a Javascript plugin, see below under "Creating new Javascript plug-ins".
    • (Note that for Hadoop JARs this provides generic configuration parameters, see below under "Creating new Hadoop plug-ins")

In this case we can see that the query is:

Code Block
languagejs
{"docGeo":{"$exists":true}}
//^^ (ie only process geo-tagged tweets, eg from cellphones)
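If you want to sanity-check a query like this against a sample document locally, a tiny matcher for the single "$exists" operator used here might look like the following (illustration only - real queries should go through MongoDB or the plugin manager itself):

```javascript
// Tiny matcher for the single MongoDB operator used in the query above
// ($exists). Does not handle any other operators.
function matchesExistsQuery(query, doc) {
  return Object.keys(query).every(function (field) {
    var wantExists = query[field]["$exists"];
    var has = doc[field] !== undefined;
    return wantExists ? has : !has;
  });
}

var query = { docGeo: { "$exists": true } };
// A geo-tagged tweet matches; one without a docGeo field does not.
```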

There are a few points to note here:

  • The overall syntax of the query is that of MongoDB
    • There are some additional extensions starting with "$": these are documented here, and can be inserted either manually or by pressing the "Add Options" button that is next to the query.
  • The document fields that the query is applied against are described here.
    • You can view the JSON format of a given document from the "Document Browser" widget, as shown in the screenshot below.

(screenshot)

As an example, say you wanted to query on only records that were tagged by Datasift with gender "Male". There would be two ways of doing this:

Code Block
//Option 1, simplest (see datasift documentation for their metadata format):
//TODO
//Option 2, most generic:
//TODO

The advantage of Option 2 would be that if you later imported other sources that had a "Gender" entity but weren't from Datasift (eg had a different metadata format), then you would not have to alter your queries.

 

TODO example - changing the query

TODO map/reduce code

Example 2 - Aggregate sentiment by gender

TODO

Example 3 - Show top co-references

TODO

Creating new Javascript plug-ins

TODO

Creating new Hadoop plug-ins

TODO

Run-time options for plug-ins

TODO

Visualizing the output of plug-ins

TODO more details on how to format fields to be usable

TODO the advanced option

TODO jsfiddle

Further reading:


Adding users and communities

The documentation so far has assumed that the application has run in its default configuration, with:

  • 1 user: Admin, infinite_default@ikanow.com
  • 1 data community: the system community
    • (plus the Admin user's personal community that is used to store temporary alias settings, saved widget configurations, etc)

In practice Infinit.e is a multi-user application that also allows multiple "communities", allowing for separation of data, aliases, custom jobs, access controls etc.

Creating new users can be performed from the Person Manager GUI.

Creating new communities can be performed from the Community Manager GUI.

A few points to note:

  • After a community is created, only the owner is initially added. Other users can be added (or removed) by selecting the "Add New Members" (or "Edit/Remove Members") button at the bottom of the right pane for the selected community.
  • For users to be able to add new sources, they must either be system administrators, or be added as "Content Publisher" or better in their community role (right hand pane after selecting a user from the "Edit/Remove Members" page).
    • (In secure mode, see below, users must be administrators to create new sources)
Warning

The secure mode of Infinit.e that is needed to guarantee system security in multi-user environments is not enabled by default. Check here for more details.

Updating the software

There are two separate components installed on the Amazon image:

...

To update the core platform, SSH into the instance and then follow the instructions provided here. There are monthly releases (though you certainly do not have to upgrade that often), described here.

There is currently no automated way to upgrade the additional components. Should patches be required, we will update the Amazon image and also provide instructions to existing customers on how to obtain the latest binaries and update their existing images.