One of the lesser-used features of Logscape is the Groovy script action on alerts. Despite how powerful this feature is, it's often left by the wayside. Today we're going to walk through using the Groovy script action to log alerts to a channel in your Slack workspace.
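To give a flavour of where we'll end up, here is a minimal sketch of the idea: a Groovy snippet that posts a message to a Slack incoming webhook. The webhook URL and the alert text are placeholders, and the exact variables Logscape exposes to the script action may differ, so treat this as an outline rather than the finished action.

```groovy
// Minimal sketch: post an alert message to a Slack incoming webhook.
// The webhook URL and message text are placeholders, not real values.
def webhookUrl = "https://hooks.slack.com/services/T000/B000/XXXX"
def payload = '{"channel": "#alerts", "username": "logscape", ' +
              '"text": "Alert fired: disk usage above threshold"}'

def conn = new URL(webhookUrl).openConnection()
conn.requestMethod = "POST"
conn.doOutput = true
conn.setRequestProperty("Content-Type", "application/json")
conn.outputStream.withWriter("UTF-8") { it.write(payload) }
println "Slack responded with HTTP ${conn.responseCode}"
```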
In this blog post we're going to be looking at what some people might call "big" data. No, that doesn't mean big in the conventional sense; it means big in the sense that the single-file dataset is 10 GB in size, and I wanted to make a "big data" pun.
The data in question is a record of NYC's 311 complaints since 2010, and the sixth most popular dataset on the NYC Open Data website. "311" is a complaints hotline in NYC. For those interested in following along or investigating the data themselves, it is freely available from the Open Data website.
Today we’re going to cover
- Creating a data source and importing the data
- First look at the data to determine interesting fields
- Some basic visualisations of the data.
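If you'd like to pull the file down before we start, the portal offers a plain CSV export over HTTP. A minimal sketch is below; note that the dataset identifier (erm2-nwe9) is my assumption of the ID for the 311 dataset, so verify it on the portal before kicking off a 10 GB download.

```sh
# Download the 311 dataset as CSV from NYC Open Data.
# Assumption: erm2-nwe9 identifies "311 Service Requests from 2010 to Present".
curl -L -o 311_complaints.csv \
  "https://data.cityofnewyork.us/api/views/erm2-nwe9/rows.csv?accessType=DOWNLOAD"
```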
In my ever-onward quest to show the world how easy it is to get up and running with Logscape, today I'm going to use a Logscape docker container to build visualisations from some publicly available CSV files in no time at all. If you've never used the Logscape docker image, check out my previous blog post.
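The piece that makes this workflow painless is mounting the CSV files into the container so Logscape can pick them up as a data source. Here's a sketch; the image name, container path and port are my assumptions, so check Docker Hub and your own setup for the real values.

```sh
# Run Logscape with a host directory of CSV files mounted inside.
# Image name, mount path and port are assumptions -- adjust to your setup.
docker run -d --name logscape \
  -p 8080:8080 \
  -v /path/to/csv-data:/logscape-data \
  logscape/logscape
```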
Here at Logscape it should go without saying that monitoring is sort of a big deal. Some would even go as far as to say it's our "thing". To go with that, we've collated what we think are the 10 best monitoring talks people should watch. Whether you're looking to implement a logging tool, build your own, or are just a developer, these talks are worth the time.
It's finally that day: Logscape is now on Docker Hub. As such, I'm going to be walking you through the process of getting Logscape running, and once you've got the hang of it you'll be able to download, run and start using Logscape all within 60 seconds. Monitoring in a heartbeat.
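For the impatient, the 60-second claim really does boil down to a couple of commands. As with the sketch above, the image name is an assumption; check Docker Hub for the official repository.

```sh
# Pull and run Logscape -- image name and port are assumptions.
docker pull logscape/logscape
docker run -d -p 8080:8080 logscape/logscape
# Then browse to http://localhost:8080 to start searching.
```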
Logscape version 3.2 is now available for public download; you can get it now from the Logscape website.
Here's a brief rundown of what Logscape 3.2 brings with it, and what we're going to cover today:
- File Explorer
- JSON Support (Including JSON Arrays; see the sample event after this list)
- Failover Overhaul
- Performance and Stability Changes
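To make the JSON support concrete, here is a hypothetical event of the shape 3.2 can now index, nested array included; the field names are made up for illustration.

```json
{
  "timestamp": "2016-02-01T09:15:00+00:00",
  "level": "WARN",
  "service": "order-api",
  "errors": ["timeout", "retry-exhausted"]
}
```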
Logscape Analytics are incredibly powerful; however, are you using them to their full potential? In this blog post we're going to go over some of the lesser-used analytics, show you how to use them, and hopefully inspire you to use your Logscape instance in new and exciting ways. So, without further ado, let's get into some searches.
Recently we’ve been working on creating new learning materials for the release of Logscape 3.0.Materials appropriate for both the Logscape expert and an individual just picking Logscape up for the first time. The first person to be addressed by this was of cof course the beginner, as such here’s a 10 minute introduction to the basics of Logscape 3.0.
Hopefully this helps some of our newer users. Keep an eye out for more advanced tutorials!
So you have written an app and it produces a log – it's brilliant; it grabs all the data you need and runs like greased lightning. All you need to do now is ensure your output file has a nice clean format – preferably one that means Logscape does all the work for you! So here are some of my top tips.
1) Add a full timestamp to every line. You wouldn't believe how much trouble is caused by people logging just times or just dates. At best, you'll struggle to get your data properly organised; at worst, you end up with a mess and data appearing in the wrong place on the graph. Do it right: set both the date and the time!
2) Add a time zone to that stamp. "My computer will never change time zone, surely it'll be fine?" Don't count on it. British Summer Time changing the system time on half your servers, servers being reset to US time, data centres moving locations... all of these things can and will happen. Adding the time zone to the stamp gives you a cast-iron assurance that the data will always be correct. That peace of mind is worth a few bytes.
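Putting both tips together, a well-behaved log line carries the full date, the time, and an explicit UTC offset, while a line with only a time becomes ambiguous the moment the clocks shift. The exact layout below is just a suggestion:

```
# Good: full date, time, and explicit UTC offset on every line
2015-03-29 01:30:00,123 +0100 INFO OrderService - order 4211 accepted

# Risky: time only -- no date, no zone, ambiguous as soon as DST changes
01:30:00 INFO OrderService - order 4211 accepted
```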
Docker 1.5 came out a few weeks ago, and with it the new stats API arrived. Before 1.5 there was no standard way to collect the metrics of running Docker containers without writing custom scripts to parse files exposed through the proc pseudo-filesystem.
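The stats endpoint is part of the Docker remote API and streams JSON metrics for a given container. A minimal sketch over the default Unix socket is below; "my-container" is a placeholder, and the --unix-socket flag needs curl 7.40 or newer.

```sh
# Stream live stats for one container via the Docker remote API.
# "my-container" is a placeholder; requires curl 7.40+ for --unix-socket.
curl --unix-socket /var/run/docker.sock \
  http://localhost/containers/my-container/stats
```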