
Overview

Extracts documents from local files, fileshares, S3 repositories, and Infinit.e shares (eg uploaded via the file uploader). It can also be used to ingest the output of custom analytic plugins.

 


Format

{
	"display": string,
	"file":
	{
		"username": "string", // Username for file share authentication
		"password": "string", // Password for file share authentication
		"domain": "string", // Domain location of the file share

		"pathInclude": "string", // Optional - regex; only files with complete paths matching the regular expression are processed further
		"pathExclude": "string", // Optional - regex; files with complete paths matching the regular expression are ignored (and matching directories are not traversed)
		"renameAfterParse": "string", // Optional - renames files after they have been ingested; the substitution variables "$name" and "$path" are supported (eg "$path/processed/$name"); "" deletes the file

		"type": "string", // One of "json", "xml", "tika", "*sv", or null to auto-decide

		"XmlRootLevelValues": [ "string" ], // The root-level XML element(s) at which parsing should begin
			// also currently used as an optional field for JSON; if present, a document is created each time that field is encountered
			// (if left blank for JSON, assumes the file consists of a list of concatenated JSON objects and creates a document from each one)
			// (Also reused with a completely different meaning for CSV - see below)
			// (In office mode, can be used to configure Tika - see below)
		"XmlIgnoreValues": [ "string" ], // XML elements that, when parsed, will be ignored - child elements will still be part of the document metadata, just promoted to the parent level
			// (Also reused with a completely different meaning for CSV)
		"XmlSourceName": "string", // If present, and the primary key specified below is also found, then the document URL is built as XmlSourceName + xml[XmlPrimaryKey]. Also supported for JSON and CSV.
		"XmlPrimaryKey": "string" // Parent to XmlRootLevelValues. This key is used to build the URL as described above. Also supported for JSON and CSV.
	}
}

Legacy documentation:


Description

The file extractor ingests various file types from a range of supported locations and processes them according to the source configuration.

Locations

The File Extractor is capable of ingesting files from the following locations:

  • Windows/Samba shares
  • The harvester's local filesystem
  • Amazon S3

File Types

The File Extractor supports the following file types:

  • Office documents (Word, Powerpoint etc.)
  • text-based documents (emails)
  • CSV
  • XML and JSON
  • Infinit.e shares
  • The results of Infinit.e plugins

Connecting to File Locations

The configuration will depend on the location of the files you are trying to extract.

Local Filesystem

To connect to the harvester's local filesystem, the following URL format must be used:

"file://<path including leading '/'>"

 

"file://" sources can only be run by administrators if secure mode is enabled (harvest.secure_mode=true in the configuration).

Local filesystem usage is mostly intended for testing, debugging, and "micro installations". The "tomcat" user must have read access to the directories and files on the path.
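
For illustration, a minimal source fragment pointing at a local directory might look like the following; the path is hypothetical, and the location is assumed to go in the source's top-level "url" field alongside the "file" block shown in the Format section:

{
	"url": "file:///data/infinite_test_docs/",
	"file": {
		"type": null // let the harvester auto-decide the file type
	}
}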

 

Infinit.e

You can connect the File extractor to Infinit.e shares and the results of custom jobs.

Infinit.e Shares

To connect to an Infinit.e share, the following URL format must be used:

"inf://share/<shareid>/<ignored>"

The share id can be obtained from the URL of the file uploader.

After the "<shareid>/" portion of the URL, any arbitrary text can be added to make the role of the share clearer. This text is ignored by the file harvester.

The source must share at least one community with the share in order to be processed.
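
As a sketch only, with a purely hypothetical share id and the location again assumed to go in the source-level "url" field:

{
	"url": "inf://share/530f32f2e4b0f82b7eb4a94d/uploaded_log_data",
	"file": {
		"type": "json" // eg if the uploaded share contains JSON records
	}
}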

Infinit.e Jobs

To connect to an Infinit.e custom job, the following URL format must be used:

"inf://custom/<customid-or-jobtitle>"

The custom id or job title can be obtained from the URL field of the plugin manager.

After the "<customid-or-jobtitle>/" portion of the URL, any arbitrary text can be added to make the role of the custom job clearer. This text is ignored by the file harvester.

The source must share at least one community with the custom plugin in order to be processed.
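
A similar sketch, using a hypothetical job title in place of the custom id:

{
	"url": "inf://custom/my_aggregation_job/output",
	"file": {
		"type": "json" // eg if the custom job emits JSON records
	}
}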

Windows/Samba

To connect to a Windows/Samba share, the following URL format must be used:

"smb://server:port/path"

 

Amazon S3

To connect to an Amazon S3 location, the following URL format must be used:

"s3://<bucket_name>/" or "s3://<bucket_name>/path/"

The files in the S3 bucket should be readable by the account specified by the access key.

S3 is not supported by default; the AWS SDK JAR must be copied onto the classpath as described here.

username/password

A username/password is required to connect to your Amazon S3 environment.

For S3, the Access ID should be entered into "username", and the Secret Key into "password".

For security, it is recommended that you create a separate AWS user with no permissions other than S3 read/list on the directories to be ingested.

domain

This field can be left blank for Amazon S3 environments.
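
Putting this together, a sketch with a hypothetical bucket and AWS's documented example access key; the Access ID goes in "username", the Secret Key in "password", and "domain" is left blank:

{
	"url": "s3://my-ingest-bucket/incoming/",
	"file": {
		"username": "AKIAIOSFODNN7EXAMPLE", // S3 Access ID
		"password": "<secret key>", // S3 Secret Key
		"domain": ""
	}
}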

File Types

This section describes the configurations for the various supported file types.

*sv Files

XmlRootLevelValues normally determines the root-level element of an XML file at which parsing should begin.

For "*sv" files, setting this field instead causes CSV parsing to occur automatically: each record is mapped into a metadata object called "csv", with the fieldnames corresponding to the values of this array (eg the 3rd column is named after XmlRootLevelValues[2], etc.).

The fieldnames can also be derived automatically by setting XmlIgnoreValues, in which case XmlRootLevelValues need not be set.
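
A sketch for a 3-column comma-separated file; the path and column names are hypothetical:

{
	"url": "file:///data/csv/network_events.csv",
	"file": {
		"type": "*sv",
		"XmlRootLevelValues": [ "timestamp", "src_ip", "dst_ip" ]
			// each record is mapped into a "csv" metadata object whose fieldnames come from this array
	}
}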

Office Files

You can use the XmlRootLevelValues parameter to configure Apache Tika for parsing of Office-type files.

There are currently two types of configuration supported:

  • "output:xml" or "output:html" to change the output of Tika from raw text to XML or HTML.
  • Strings of the format "<MEDIATYPE>:{ paramName: paramValue, ... }", where <MEDIATYPE> is a standard MIME type and determines which Tika parser to configure; the paramNames and paramValues correspond to setter functions and their arguments.

Example: "application/pdf:{'setEnableAutoSpace':false}" will call PDFParser.setEnableAutoSpace(false).

 

 

 


Examples

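As a sketch only, here is a fuller source fragment tying the above together, assuming a Samba share of Word documents; all names, paths, and credentials are hypothetical:

{
	"url": "smb://fileserver.example.com:445/shared_docs/reports/",
	"file": {
		"username": "svc_harvester",
		"password": "********",
		"domain": "EXAMPLE",
		"pathInclude": ".*\\.docx?$", // only process .doc/.docx files
		"renameAfterParse": "$path/processed/$name", // move files aside once ingested
		"type": "tika"
	}
}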
