Prior to Gartner, I was part of several start-ups, a couple of which were acquired. As a result, I have also worked in very large environments. I’ve always focused on infrastructure and applications, and enjoyed both the security and troubleshooting aspects. It was always my goal to learn as much as possible and diversify my skills. When something new came along, as Splunk did, I was instantly interested in how we could use it to resolve issues faster, troubleshoot effectively, and collaborate in environments where we tried to keep things secure between groups. Logs are incredibly useful to subject matter experts, whether they are developers, network engineers, server admins, virtualization admins, or storage experts. I was a Splunk customer four times over, and always found value in the tool.
One challenge with the product is that data volumes are always increasing, and the amount of money spent on Splunk follows suit. The licensing model is consumption-based, so costs rise with data ingest, unlike most software that IT Operations teams deal with. This often rubs customers the wrong way, since they are used to buying hardware with this in mind, but not software. The cloud will hopefully change this perception, but as of today it remains an issue.
I attended the user conference in 2011 (two years ago), and it was interesting to see all of the cool use cases people had come up with for this very open-ended technology. With around 500 people, it was a good turnout for a growing provider. Fast forward two years, and Splunk has grown considerably, gone public, and become an even hotter company. With about 1,600 people turning out for the conference, there were far too many sessions to attend! I did catch some good customer sessions on how they were using Splunk, and they were useful, but most were not much different from my day-to-day interactions with Gartner clients. I did hear some more innovative stories, but by and large people were using the product for the same things I was when I first purchased it in 2007. There has been considerable progress in the product and its capabilities since then; let’s dig into some of those announced at .conf.
The major announcements were around Splunk 6.0:
- Data Models – Splunk has always been easy to use, but you had to be a little technical to grasp the powerful search language. The data model concept takes unstructured data and applies simple search syntax to create dimensions that can then be related to one another. This means you can query the data in a relational way using Excel-style functionality, such as the pivot tables built into the product, which makes the tool more approachable to non-technical users.
- Performance – To scale and handle the impact data models can have on performance, Splunk has considerably sped up search, and now allows users to accelerate parts of the product without making some of the difficult decisions that previously had to be made. While some of the speed boosts require more disk space (see the Cloud item below), the performance gains are helpful.
- Cloud – This one is the most interesting to my coverage, but as I dug into it more, it isn’t really cloud; it does, however, help enable customers (which is a good thing). The Cloud offering is actually a managed service run by Splunk on AWS. Splunk still specs and implements an environment for each customer based on their requirements, so services are involved. The nice part about the offering is that Amazon has very low storage costs (especially since you can leverage S3 for low cost, and Glacier for even lower cost), and with high retention requirements the cost of infrastructure can sometimes exceed the cost of the Splunk licenses. The other interesting bit is the option to connect with on-premises Splunk deployments for universal visibility across all types of applications and infrastructure.
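To make the data model idea above a bit more concrete, here is a rough analogy in Python (this is not how Splunk works internally, and the log format and field names are invented for illustration): named fields are extracted from unstructured log lines, and counts are then pivoted across two of the resulting dimensions, much as a non-technical user would do in the pivot UI.

```python
import re
from collections import defaultdict

# Hypothetical web-access log lines; "host" and "status" stand in for
# the dimensions a data model might define over unstructured events.
logs = [
    "10.0.0.1 - GET /index.html 200",
    "10.0.0.2 - GET /login 404",
    "10.0.0.1 - POST /login 200",
    "10.0.0.2 - GET /index.html 200",
]

# Field extraction: named groups play the role of data model attributes.
pattern = re.compile(r"(?P<host>\S+) - (?P<method>\S+) (?P<path>\S+) (?P<status>\d+)")

# Pivot: count events by host (rows) and status code (columns).
pivot = defaultdict(lambda: defaultdict(int))
for line in logs:
    m = pattern.match(line)
    if m:
        pivot[m.group("host")][m.group("status")] += 1

for host, counts in sorted(pivot.items()):
    print(host, dict(counts))
```

The point of the analogy is that once fields (dimensions) are defined over raw events, summarizing them becomes a spreadsheet-style operation rather than a search-language exercise.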
Splunk is focusing on customer enablement, fighting the hard battle of getting customers to use their data to create a clear and visible ROI. This means building special use cases for each customer that fit its business demands, creating benefits that were not originally anticipated when the product was implemented. It also improves stickiness for a product that is under increasing pricing pressure due to the high cost of consumption-based licensing.