...
Page | Reviewed by Alex | Other's Comments | Andrew Comments | Alex Comments | Status
---|---|---|---|---|---
File extractor | | | I have made the necessary changes in response to comments (05/20/2014). Would it be possible to have an SV example in the source gallery for when XMLIgnoreValues are used to auto-derive field names? (05/20/2014) | (As per the db extractor comment) call out specifically how the URL is constructed in the different cases. There are some general fields (renameAfterParse, path*, mode) that need to be documented (see the TODO). Is there a reason you have different sections for CSV and SV? CSV is just the default case of SV, i.e. the separator is the default comma. The *sv documentation is a bit unclear, I think; it's much simpler in at least 90% of cases. In most cases the configuration is just either setting the columns or using auto-config mode, so I'd focus on that, then on ignoring other header fields and setting the separator/quote/escape (plus the URL setting, which is general for XML/JSON/SV and is described above). I'm sure there are lots of better CSV configuration documents out there than my original one (http://logstash.net/docs/1.4.0/filters/csv), so feel free to find one of those to start with! I have similar comments about JSON/XML: the main thing is just selecting the root object, so I'd start with an explanation of that (it's very similar between XML and JSON, so probably copy/paste) and then cover the other fields. |
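To make the *sv discussion concrete, here is a minimal sketch of what a simple SV file-extractor configuration could look like. All field names below (the file/url nesting, type, separator, quote, escape, columns) are illustrative assumptions for discussion, not the extractor's confirmed schema:

```json
{
  "file": {
    "url": "smb://fileserver/share/logs/",
    "type": "sv",
    "separator": ",",
    "quote": "\"",
    "escape": "\\",
    "columns": ["timestamp", "host", "message"]
  }
}
```

Omitting the explicit columns list and enabling an auto-config mode (deriving field names from the header row) would be the other common case the review describes.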
Feed extractor | | | | | DONE
Web extractor | | Caleb: It would be useful to have an example with title/description in the extraUrls to point out how this is different from the feed extractor, or something separating the two. I'm also not sure "web extractor" is a very good name; maybe "URL extractor" or something? "Web" is ambiguous. | I have addressed your comment by highlighting what distinguishes the feed extractor from the web extractor, but we still require an example in the source gallery that includes title, description, etc. (06/03/2014) | |
Database extractor | | | I have made the necessary changes in response to comments (05/20/2014). Requires example URLs for connecting to the database when primaryKeyValue is specified and when it is not. | There's another missing field that has changed between legacy and pipeline: the database object now has a "url" field (that was previously at the source top level) ... if no value is specified for 'primaryKeyValue' (hmm, this also seems to be missing from the documentation; it is in the code here: https://bitbucket.org/ikanow/ikanow_infinit.e_community). (Re authentication: made a minor update to correct an error in the legacy documentation, and to reflect the v0.3 functionality change.) |
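A hedged sketch of the kind of example this row asks for, showing the "url" field nested inside the database object alongside a primaryKeyValue. Only "url" and "primaryKeyValue" are names taken from the review comments; everything else (the query and primaryKey fields, the JDBC connection string) is an illustrative assumption:

```json
{
  "database": {
    "url": "jdbc:mysql://dbhost:3306/mydb",
    "query": "SELECT id, title, body FROM articles",
    "primaryKey": "id",
    "primaryKeyValue": "1000"
  }
}
```

The behavior when primaryKeyValue is omitted (presumably harvesting from the start of the table rather than resuming after the given key) should be confirmed against the code linked above before documenting it.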
Follow Web links | | Drew: I haven't reviewed the whole doc, but it should definitely include an example of a splitter instead of just follow-web-links. | | |
Automated text extraction | | Drew: { config_param_name", string, ... }" should probably be { "config_param_name" : string, ... } to make it valid JSON. Drew: This list should probably only include the config options for boilerpipe, tika, and AlchemyAPI; the others included are feature extractors. The distinction between the two isn't made clear enough, I think, for a new user. Drew: We probably need to include some examples of when to use each engine (a common question I get), e.g. tika is used to process Word docs, PDFs, and Office files; boilerpipe is used for web data. Additionally, the examples should show some samples of raw text processed by each engine and the output. | | |
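Drew's corrected shape and his engine guidance can be sketched as a hypothetical config fragment. The engine-selection field names below are assumptions for illustration only; the { "config_param_name" : string } shape and the tika/boilerpipe roles are the parts that come from the comments above:

```json
{
  "textEngine": {
    "engineName": "tika",
    "engineConfig": { "config_param_name": "value" }
  }
}
```

Per the comment, tika would be the choice for Word/PDF/Office documents, while boilerpipe suits scraped web pages.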
Manual text transformation | | Drew: The "Log file from File Share" example is missing the global javascript declaration, which makes it impossible to follow the description below. Alternatively, rewrite the description to: "After "globals" has been used to define a function called decode (see <globals>), decode is used to capture the metadata for the sample input data into an object called info. Info can then be used in the example that follows:" Drew: Possibly include a sample of the original XML prior to the XPath transform; the example is a little tough to follow without that. | | |
Document metadata | | Caleb: What does "Ibid." mean? Caleb: appendTagsToDocs should probably be reworded for clarity to something like "defaults to false; when true, appends source tags to extracted documents". | | | DONE
Content metadata | | Randy: Need to add the 'g' flag for grid. Randy: How many times do we need to define what each field does? Randy: The in-process comment needs to be removed? | Added missing examples for xpath and regex. (05/20/2014) | |
Manual entities | | | | |
Manual association of entities | | | | |
Document storage settings | | | | |
Feature extraction | | Drew: "This toolkit element passes the document text" should probably read "document full text". Drew: The warning here should point to a section in the automated text extraction page that explains which feature engines need a text extractor (and which work best for which problems). Drew: As with my comment on automated text extraction, I'd move all of the feature-engine blocks here; as it is, this page is very sparse and doesn't really reflect what should be here. | | |
Aliasing | | | | | Not supported
Harvest control settings | | | Require more examples for the following: | |
Search index settings | | | More examples in the source for searchIndex parameters would be beneficial. | |
Lookup tables | | | I tried to edit an existing example from the old source, as I could not find any new examples. Please verify the changes I made to the example source and scripts. | |
Javascript globals | | | | |
Logstash extractor | | | Would it be possible to have some Logstash examples? | |
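As a starting point for the requested examples, here is a standard Logstash 1.4-era csv filter, in the style of the filter documentation linked in the file-extractor review above (http://logstash.net/docs/1.4.0/filters/csv). The column names are illustrative, and how such a filter would be embedded in the Infinit.e Logstash extractor's own configuration is an assumption to verify against the extractor docs:

```
filter {
  csv {
    separator => ","
    columns => ["timestamp", "host", "message"]
  }
}
```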
...