Advanced data analytics and use-cases in Logscape

Logscape Analytics are incredibly powerful, but are you using them to their full potential? In this blog post we're going to cover some of the less-used analytics, show you how to use them, and hopefully inspire you to use your Logscape instance in new and exciting ways. So, without further ado, let's get into some searches.

3.03 is here (and now)

Performance:
For this release we carried out more work on execution performance.
Single-threaded benchmarking covers two profiles: Search-page and Workspace-oriented execution. When a search is executed from the Search page it builds the facet stats needed for ad-hoc analysis, and it also streams a large set of events to the Jetty web server. All of that extra work adds roughly 40% overhead, and we were seeing about 80k events per second for a single thread (30-40 discovered fields). The Workspace execution plan yields 120k events per second, per thread.
The execution plan follows these steps:
1. Identify log files that fall within the selected time period and meet the system-field criteria (i.e. _agent, _type, _tag, etc.)
2. Select the time-series buckets associated with each resource
3. Scan the time-series buckets and build data-type patterns, synthetics and discovered fields for each event (using indexed fields is much faster than synthetics - 3.03 enhancement)
4. Aggregate and pump data using map-reduce execution of the functions (avg, count, etc.) (3.03)
5. Jetty aggregates the incoming streams and drives the interface using WebSockets
6. WebSocket events then carry status messages, replay-event notifications (3.03), facets and updated histogram data
Note I: "(3.03)" marks where performance improvements were made.
Note II: A single thread processing 100,000 events per second is sustainable, so 16 threads should process the equivalent of 1,600,000 events per second (in theory). Scalability depends on I/O subsystem performance: disk I/O, OS buffers and the network.
Note III: Log-file processing is carried out with one thread per request.
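As a rough illustration of steps 3-4 above, here is a minimal sketch of the scan-then-map-reduce shape of the plan. The `Event` record and field names are toy assumptions for the example, not Logscape internals:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Toy model of steps 3-4: scan events, extract a discovered field,
// then map-reduce an aggregate function (avg) per field value.
public class ExecPlanSketch {
    record Event(String agent, double responseMs) {}

    // Aggregate avg(responseMs) grouped by _agent; parallelStream mimics
    // the per-thread bucket scans being reduced into one result.
    static Map<String, Double> avgByAgent(List<Event> events) {
        return events.parallelStream()
                .collect(Collectors.groupingBy(Event::agent,
                         Collectors.averagingDouble(Event::responseMs)));
    }

    public static void main(String[] args) {
        List<Event> events = List.of(
                new Event("host-1", 10), new Event("host-1", 30),
                new Event("host-2", 20));
        System.out.println(avgByAgent(events));
    }
}
```

The real engine streams buckets rather than holding a list in memory, but the group-then-reduce shape is the same.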
Important: Before upgrading remember to: 1) back up your config, 2) back up the downloads and space folders (in case of reversion), and 3) make sure all agents are online!
Release notes:
1. Fixed a summation problem where only the first event was evaluated
2. Further improvements to search performance and UI interaction
3. Ability to index any field, discovered or synthetic (yields faster performance; requires reindexing)
4. Improved data-types page for debugging and benchmarking
5. DataSources now use natural keys instead of UUIDs. This should combat DataSource duplication when importing/exporting. Note: IDs are only generated when new DataSources are saved
6. The temporary directory can now be set (work/tmp by default). The directory is cleared when Logscape reboots; when upgrading you will need to clear it manually
7. Networking now uses faster lz4 compression. Offline agents will break if they are not updated!
8. Geo-maps now use a choropleth palette
9. Workspace linking now forces correct filtering when driven via URL clicks
10. The Search-page chain button now saves state and auto-runs the search when auto-run is enabled
11. Fixed random hs_err crashes caused by ChronicleQ
12. Rickshaw charts now format numbers with ',' on mouse-over
13. Syslog no longer prints to stdout

Quick and Dirty Logscape sizing guide

One of the first questions our customers hit us with is:

‘How much data can my server handle?’

I hate to be difficult, and nothing is more frustrating than answering a question with a question, but that is often the case with a data-centric solution!

> Q: What is your server spec? A: It can handle X volume per day.
> Q: What are your data volumes? A: You need this server setup
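To make the spec-versus-volume dependency concrete, here is a hedged back-of-envelope calculation. It reuses the ~80k events/sec/thread figure from the 3.03 benchmarks above; the 300-byte average event size is an assumption for illustration, and the result is raw scan capacity, not a recommended daily ingest:

```java
// Back-of-envelope sizing sketch. The per-thread event rate comes from the
// 3.03 single-thread benchmarks; avgEventBytes is a made-up illustrative figure.
public class SizingSketch {
    static double gbPerDay(int threads, long eventsPerSecPerThread, int avgEventBytes) {
        double bytesPerDay = (double) threads * eventsPerSecPerThread
                * avgEventBytes * 86_400;       // seconds in a day
        return bytesPerDay / 1_000_000_000.0;   // decimal GB
    }

    public static void main(String[] args) {
        // Theoretical ceiling for an 8-thread box at 80k events/sec/thread.
        System.out.printf("8 threads: ~%.0f GB/day scanned%n",
                gbPerDay(8, 80_000, 300));
    }
}
```

In practice you work the formula both ways: fix the server spec and solve for volume, or fix the volume and solve for threads, which is exactly the question-for-a-question above.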


Realtime WebSocket streaming from the cloud to you: Part II


I’ve got the ‘green-light’ and an IP allocated.

In Part 1 I built a Groovy WebSocket server plus Java and HTML clients. In Part 2 I'll deploy it into AWS, fire up the clients and add the GitHub link. With WebSocket clients, I can run Logscape in the 'wild' and make use of the Alert-Feed WebSocket functionality to stream data to my local servers.
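A minimal sketch of what such a streaming client can look like, using the JDK 11 `java.net.http.WebSocket` API rather than the hand-rolled client from Part 1. The endpoint URL is a placeholder, not a real Logscape feed:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.util.concurrent.CompletionStage;

// Minimal WebSocket listener that reassembles text frames and prints
// each complete streamed message (e.g. an alert-feed event).
public class AlertFeedClient implements java.net.http.WebSocket.Listener {
    final StringBuilder buffer = new StringBuilder();

    @Override
    public CompletionStage<?> onText(java.net.http.WebSocket ws,
                                     CharSequence data, boolean last) {
        buffer.append(data);                    // frames may be partial
        if (last) {
            System.out.println("event: " + buffer);
            buffer.setLength(0);
        }
        if (ws != null) ws.request(1);          // ask for the next frame
        return null;
    }

    public static void main(String[] args) {
        if (args.length == 0) {
            // No endpoint given: exercise the listener locally as a demo.
            AlertFeedClient c = new AlertFeedClient();
            c.onText(null, "demo ", false);
            c.onText(null, "event", true);
            return;
        }
        // args[0] would be something like ws://<aws-host>:8080/feed (placeholder).
        HttpClient.newHttpClient().newWebSocketBuilder()
                .buildAsync(URI.create(args[0]), new AlertFeedClient())
                .join();
    }
}
```

The original posts used Groovy on the server side; this client shape works against any plain WebSocket text feed.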

AWS Deployment: Before running on the AWS server I need to find the right AMI – one with Java installed. The OpenJDK is installed on most Linux flavours, and I prefer to work with Ubuntu. In the following grab you can see where I’ve fired up the AMI instance.


Realtime WebSocket streaming from the cloud to you: Part I

This is a two-part post. In Part 1 I build the 'spike', using Groovy to run a WebSocketServer that streams data to HTML5-WebSocket and Java WebSocket clients. The HTML client uses the elegant Smoothie Charts library (great for streaming). In Part 2 I'll show you how to run it on Amazon's AWS.
At the end we have a real-time feed plotting the data from the cloud; it looks something like the grab on the right.

Logscape 2.4 rolls out the door

A quick note to give everyone a heads-up that 2.4 is available for your viewing pleasure (pun intended). [release notes]

Notable things

  • Workspaces UI niceties

    We ran user labs to find out who stumbled where, and on what. There is a long list of tweaks, so I suggest you view the workspace video to see what's changed (link on the workspace page). A quick preview of Search Panel editing (video):


Logscape 2.3 is now available!

After a long test, fix and thrashing cycle 2.3 is finally available!

Head over to the download page to request the download.

To upgrade you can follow the standard upgrade procedure as described here:

After that, the following installation tasks can be followed:

  • Install the new Home page: upload the new logscape/downloads/logscape-home.config and click deploy
  • Install the updated Logscape Monitoring app (it has a couple of minor updates): upload the new logscape/downloads/logscape-audit.config and click deploy
  • If you wish, edit your existing DataSources to include the new System-Fields: edit your DataSource and tick the 'System-Time-Series' checkbox to include 'DayOfWeek, Date, etc.'

The release notes can be found in the usual place.

Best Regards,


Logscape Upgrade Guide for Version 2.2

As part of 2.2 we developed the Logscape Monitoring App and rolled the default DataSources into it (plus a new one). We also managed to remove a bunch of redundant DataSources and tidy things up. We have now carried out the upgrade on several environments.

One environment has 120 Indexers and performs at about 100,000,000 events per second (wow!).


The following steps are similar to a standard upgrade but with a few quirks. Have a read of the following page.

Logscape 2.2 is now available for download

See the release notes for ticket information

LogManagement Dashboards just got more flexible 😉

We have added several UI niceties and also included a packaged Logscape-monitor app (included in the downloads folder). I’ll walk you through each of these.

1. Workspace level filtering

How many times have you looked at a Workspace (dashboard) and thought: 'wow, there are just too many lines - it looks like a pile of spaghetti. I just want to see this process, or this server's information.' Now you can.

Workspace-level filters are an interactive filter on the chart legend items and table elements. Only data matching the entered text is shown (multiple entries are supported, separated by commas). In the example, the WindowsApp we are viewing shows processes from across several servers; we filter the workspace to restrict the data to the Wmi processes.
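The filtering behaviour can be sketched as a substring match over the legend items. This is a toy model assuming simple contains-style matching; the real UI's matching rules may differ:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

// Toy model of workspace-level filtering: keep only legend items that
// match at least one of the comma-separated filter entries.
public class WorkspaceFilter {
    static List<String> filter(List<String> legendItems, String filterText) {
        String[] entries = filterText.split(",");
        return legendItems.stream()
                .filter(item -> Arrays.stream(entries)
                        .anyMatch(e -> item.contains(e.trim())))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> legend = List.of("Wmi.cpu host-1", "Unix.cpu host-2",
                                      "Wmi.mem host-1");
        // Multiple entries act as an OR: anything matching "Wmi" or "host-2".
        System.out.println(filter(legend, "Wmi, host-2"));
    }
}
```

Because the filter is applied at the workspace level, every chart legend and table on the dashboard narrows at once, rather than per panel.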



2. Logscape Monitoring App

We have now leveraged the audit data built into the system runtime [work/vsaudit, events, audit, logger.stats]. Note, this approach relies exclusively on discovered fields (and was pretty easy to put together). It includes five screens: Data Audit, User Activity, System Performance and Alerting, as well as the underlying VScape service status.

Note: You can deploy the logscape/downloads/logscape-monitor.config file (containing the Workspace and DataSource configuration) on the Deployment page: upload the config file and click deploy to install it and make it available.




Alert: audit-alerts

3. Deploy config files via the deployment page



4. Export configuration elements using a type specifier:

Using Workspace, Search, DataType, DataSource and Alert qualifiers.

i.e. deployment-export