Intelligent Log Analysis – Field Discovery

Field discovery…

… is cool because it does most of the hard work for you. It finds system metrics, emails, IP addresses and all sorts of things that you never really realised were filling up your logs. Log analysis has never been so powerful. It's nice that you can add data, click on Search and see results. Log analysis tools keep getting smarter and smarter.

Logscape 2.1 builds on the already popular auto-field discovery by letting users add their own ‘auto-patterns’. The system is called GrokIt. I'm going to discuss the two approaches and how they work within Logscape.


  • Auto-Field discovery (Key-Value pairs)
  • GrokIt Pattern based discovery (Well known patterns)

Automatic Log Analysis of Key-Value pairs

With 2.0 we launched Key-Value pattern extraction. The idea is simple: whenever a recognised Key-Value pattern is found, we index the pair and make them searchable terms.

For example:    CPU:99 hostname:travisio 

OR      { "user":"john barness", "ip":"…", "action":"login" }
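As a rough sketch of the idea (an illustration only, not Logscape's actual extractor), a Key-Value recogniser needs little more than a regex for bare `key:value` tokens plus a JSON fallback:

```python
import json
import re

# Illustrative sketch only -- not Logscape's actual extractor.
# Recognises bare key:value tokens, plus JSON-style lines.
KV_PATTERN = re.compile(r'\b(\w+):(\S+)')

def discover_fields(line: str) -> dict:
    """Return the Key-Value pairs discovered in a single log line."""
    line = line.strip()
    # JSON-style lines: every top-level entry becomes a searchable field
    if line.startswith('{'):
        try:
            return {k: str(v) for k, v in json.loads(line).items()}
        except ValueError:
            pass
    # Plain key:value tokens
    return {k: v for k, v in KV_PATTERN.findall(line)}

print(discover_fields("CPU:99 hostname:travisio"))
# {'CPU': '99', 'hostname': 'travisio'}
```

Once a pair is discovered, both the key and its value become terms you can search on directly.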

Pattern based extraction (GrokIt):

With this release we have included the ability to extract well-known patterns such as email addresses, hostnames, log levels, paths, etc. Every time one of these patterns is seen, the value is extracted and indexed against the corresponding key (e.g. _email). The standard config file is under logscape/downloads/

#field-name, substring match (leave blank if unavailable), and a regular expression matcher that extracts a single group for the value

Each of these patterns was chosen as the most practical in terms of either (a) surfacing useful information or (b) slicing your data by time (hour of day).

Each entry contains the FieldName (lhs) : Expression (rhs).
The regular expression must return a single group that contains the value (see the orange brackets above). At the bottom we reference some of the awesome regular expression tools we used to build these.
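To make the entry format concrete, here is a minimal sketch of how such a table could be evaluated (the pattern names, expressions and substring pre-filters below are my assumptions for illustration, not the shipped Logscape configuration):

```python
import re

# Illustrative sketch only -- these patterns and pre-filters are
# assumptions, not the shipped Logscape configuration.
# Each entry: field name -> (substring pre-filter or None, compiled
# regex whose single group captures the value).
GROK_PATTERNS = {
    "_email": ("@", re.compile(r'([\w.+-]+@[\w-]+\.[\w.-]+)')),
    "_level": (None, re.compile(r'\b(DEBUG|INFO|WARN|ERROR|FATAL)\b')),
}

def grok(line: str) -> dict:
    """Extract every known pattern found in the line."""
    fields = {}
    for name, (substring, pattern) in GROK_PATTERNS.items():
        # Cheap substring check first; skip the regex when it cannot match
        if substring is not None and substring not in line:
            continue
        match = pattern.search(line)
        if match:
            fields[name] = match.group(1)  # the single capturing group
    return fields

print(grok("ERROR login failed for john@example.com"))
# {'_email': 'john@example.com', '_level': 'ERROR'}
```

The substring pre-filter is the cheap first pass: the expensive regex only runs on lines that could possibly match.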

How do I configure it?

To make changes, add or remove entries: open your favourite text editor (vim?), make the changes and save the file (make sure you test it). Once saved, upload the file via the Deployments page, where it is replicated to all agents on the network.

Any new files being monitored will pick up the configuration change (note: it won't happen midway through a file). To apply the change retrospectively you will need to re-index the DataSource.

When is it applied?

As with anything, we have tried to make both discovery systems as fast as possible. Key-Value extraction runs at 17-20MB/s per pattern; unfortunately the eight supported rules cumulatively slow things down. GrokIt (regular expression parsing) runs at about 14MB/s per compiled pattern. Again, this is too slow; as noted above, there are eight of them.
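The cumulative cost is easy to see with a back-of-the-envelope model (an assumption for illustration, not a measured benchmark): if each pattern scans the full stream in turn, the effective rate is the per-pattern rate divided by the number of patterns.

```python
# Back-of-the-envelope model (an assumption, not a measured benchmark):
# if each pattern scans the full stream sequentially, the effective
# throughput is the per-pattern rate divided by the pattern count.
per_pattern_mb_s = 14   # GrokIt: ~14 MB/s per compiled pattern
patterns = 8

effective_mb_s = per_pattern_mb_s / patterns
print(effective_mb_s)   # 1.75 MB/s -- far too slow to run at search time
```

At under 2MB/s, running all eight patterns while the user waits is clearly a non-starter, which is why the work is pushed to index time.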

IndexTime: The easiest way to remove the performance penalty is to do the work once, and not when the user is waiting. In our case, when either of the discovery systems is enabled, a Field Database is used to store the data in its most efficient form (dictionary-oriented maps). This decouples the processing and provides reasonable search performance on attributes that are unlikely to change.
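To illustrate what dictionary-oriented storage buys you (a sketch of the general technique, an assumption on my part rather than Logscape internals): each distinct value is stored once, and every event keeps only a small integer id pointing at it.

```python
# Illustrative sketch of a dictionary-oriented field store (the general
# technique, assumed for illustration -- not Logscape internals).

class FieldColumn:
    def __init__(self):
        self.dictionary = []   # id -> value, each distinct value stored once
        self.ids = {}          # value -> id
        self.events = []       # per-event value ids, in index order

    def add(self, value: str) -> None:
        if value not in self.ids:
            self.ids[value] = len(self.dictionary)
            self.dictionary.append(value)
        self.events.append(self.ids[value])

    def get(self, event_no: int) -> str:
        # Search-time lookup: resolve the id back to the stored value
        return self.dictionary[self.events[event_no]]

hosts = FieldColumn()
for h in ["travisio", "travisio", "web01", "travisio"]:
    hosts.add(h)

print(hosts.get(2))           # web01
print(len(hosts.dictionary))  # 2 -- repeated values stored only once
```

Because fields like hostname or log level repeat constantly, the dictionary stays tiny even for large indexes, which is what makes search-time lookups cheap.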

SearchTime: At search time the executor pulls in any discovered fields and makes them available for that event. This provides decent performance and better system scalability.

Configurable by the DataSource

To give you control over this cost, we have exposed FieldDiscovery flags on the DataSource/Advanced tab. Standard Logscape sources have discovery disabled.


Some great regular expression tools:

Regards, Neil