Overview of the Infinit.e Data Harvesting Process

The Infinit.e platform features a robust set of data harvesters that give it powerful data extraction and transformation (enrichment) capabilities. Infinit.e's harvesters are designed to consume data from a variety of sources and media types, including:

  • Web-based content accessible via URL, including:
    • Static HTML content;
    • RSS- and ATOM-based news feeds;
    • RESTful web service interfaces;
  • Traditional relational database management systems (RDBMS) via Java Database Connectivity (JDBC) drivers;
  • Files located on local and network attached storage devices.
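
For illustration, a minimal source definition for an RSS/ATOM feed source might look like the sketch below. The field and element names are assumptions based on the general shape of Infinit.e source documents, not the authoritative schema; see the Source Document Specification referenced later on this page for the exact format.

    {
        "title": "Example news feed",
        "description": "Harvests articles from a hypothetical RSS feed",
        "isPublic": true,
        "processingPipeline": [
            {
                "feed": {
                    "extraUrls": [
                        { "url": "http://example.com/news/rss.xml" }
                    ]
                }
            }
        ]
    }

In this sketch, swapping the "feed" element for a "web", "file", or "database" element would point the same source at static web pages, local or network file shares, or a JDBC-accessible RDBMS respectively.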

Source Pipeline

Harvesting and enrichment is a logical process built around applying a pipeline of processing elements to documents emanating from a source.

The following high-level steps are applied to the source data:

  1. Extract data from the source, turn it into documents, and extract metadata from source formats such as XML, PDF, etc. (harvesting)
  2. Enrich source data by extracting entities, events, geographic/location data, etc. This is broken down into the following phases (enrichment; note: the roadmap is to move this to a completely user-defined UIMA chain):
    1. Structured Analysis Handler, phase 1: fill in unstructured document-level fields (title, description, full text) from metadata, if needed.
    2. Unstructured Analysis Handler, phase 1: use regexes and javascript to pull out new metadata fields from the unstructured document-level fields.
      1. (Special case: if Tika is specified as the text extraction engine, then this is performed before any Unstructured Analysis Handler)
    3. Unstructured Analysis Handler, phase 2: use regex replaces to transform the source text, if needed.
    4. Unstructured Analysis Handler, phase 3: use regexes and javascript to pull out new metadata fields from the cleansed unstructured document-level fields.
    5. Standard extraction, phase 1 (text extraction): use a "text extractor" to create the text that is submitted to the entity extraction service in the next phase (if needed, often the entity extraction service will combine the 2 phases).
    6. Standard extraction, phase 2 (entity extraction): use an "entity extractor" (e.g. AlchemyAPI) to pull out entities and associations from the submitted text/URL.
    7. Structured Analysis Handler, phase 2: fill in the remaining document-level fields (URL, published date, document geo, plus the title, description, and full text if these returned null before, i.e. in case the UAH has since filled in the required metadata fields).
    8. Structured Analysis Handler, phase 3: create new entities from the metadata, combine entities from all phases into associations.
  3. Update entity counts/aggregates (generic processing - statistics)
  4. Store the finished documents within Infinit.e's MongoDB data store and Elasticsearch index (generic processing - aggregation)

Although the steps are listed in sequence, there is considerable flexibility in the order of pipeline elements.
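
As a rough illustration of how the enrichment phases above map onto pipeline elements, the middle of a source's processing pipeline might contain elements along the following lines. The element and field names here are illustrative assumptions only (the "caseNumber" field and its regex are hypothetical); the Pipeline Documentation referenced below is the authoritative reference.

    "processingPipeline": [
        { "textEngine": { "engineName": "tika" } },
        {
            "contentMetadata": [
                {
                    "fieldName": "caseNumber",
                    "scriptlang": "regex",
                    "script": "Case\\s+#(\\d+)"
                }
            ]
        },
        { "featureEngine": { "engineName": "AlchemyAPI" } }
    ]

In this sketch the text engine element corresponds to the text-extraction phase, the content metadata element to the Unstructured Analysis Handler's regex/javascript phases, and the feature engine element to the entity and association extraction phase.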

The pipeline elements can be approximately grouped into the following categories:

  • Extractors: generate mostly empty Infinit.e documents from external data sources
  • Globals: generate javascript artifacts that can be used by subsequent pipeline elements
  • Secondary extractors: enable new documents to be spawned from the existing metadata
  • Text extraction: manipulation of the raw document content
  • Metadata: generate document metadata such as title, description, date; and arbitrary content metadata using xpath, regex, and javascript
  • Entities and associations: create entities and associations out of the text
  • Storage and indexing: decide which documents to keep, what fields to keep, and what to full text index (for searching using the GUI/API)
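
Putting these categories together, a processing pipeline is simply an ordered array of such elements. A purely illustrative skeleton (element names are assumptions, with the configuration of each element omitted) might look like:

    {
        "processingPipeline": [
            { "web": {} },
            { "globals": {} },
            { "textEngine": {} },
            { "docMetadata": {} },
            { "contentMetadata": [] },
            { "entities": [] },
            { "searchIndex": {} }
        ]
    }

Reading top to bottom, these would correspond to the extractor, globals, text extraction, document metadata, content metadata, entities and associations, and storage/indexing categories above.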

Creating a Source

The following WIKI pages describe in detail the steps involved in creating sources:

  1. Extractors
    How to specify the mechanics required to extract data from a source system:
    1. File extractor
    2. Feed extractor
    3. Web extractor
    4. Database extractor
  2. Entities and associations
    An introduction to the Structured Analysis Harvester and how to specify the methods for enriching structured data sources with geographic information, entities, and events.
    1. Specifying Document Level Geographical Location
    2. Manual entities
    3. Manual association of entities
    4. Javascript globals
    5. Transforming data with JavaScript
  3. Metadata
    1. Document metadata
    2. Content metadata

A simple web-based GUI is available in conjunction with the structures described in these pages.

Source Reference Documents

Source Document Specification

The following links provide detailed information regarding the objects that make up a Source document and the individual fields within each object.


Pipeline Documentation

Sample Source Documents

The following sample source documents are provided as an aid to learning how to create your own sources:

Source APIs:
