The cost of analyzing log files and operational data is adding up for many companies. Moving data around is expensive in itself, and per-gigabyte vendor fees are making it even more so.
In the past six months, we’ve heard from a lot of companies that have placed hard limits on the amount of operational data they’re willing to collect and index in their centralized log management service. Many actively enforce an artificial “data ceiling”: only data pre-defined as highly valuable gets indexed, and everything else gets ignored.
This makes a lot of sense as a cost-control mechanism. Not all data is created equal, and some data will always be more valuable than other data. But as long as you have a good way to analyze it, the more data you have, the deeper and more complete a picture you can build of what’s going on. Cutting apparently lower-value data out of the picture may cut your costs, but it also cuts away at your valuable insights.
With the launch of Logscape 2.0, we’re inviting companies everywhere to take a more holistic approach to log file analysis and operational analytics. We’re helping our customers break the data ceiling with cost-effective, massively scalable analytics for high-value AND lower-value data, using both localized and centralized log management. Oh, and we’re also letting them index unlimited data volumes for free, so they can get up and running quickly and scale over time to analyze ALL of their operational data, not just what they can afford to collect.
So from the team at Logscape, we hope you enjoy the new release – give it a spin and let us know what you think.
We’ll soon be sharing more stories about how our customers are getting deeper insights from more holistic, cost-effective, and massively scalable operational analytics.