The never-ending dilemma of IT and business: the easier we make things, the more complex they get. Almost every enterprise I talk with today is going through some type of transformation from historical compute solutions (running it in “my” data center) to a hybrid environment. The definition of hybrid varies wildly depending on which enterprise you talk to, what level of risk they can accept, and what level of agility they need to continually enable the business.
Some consider hybrid to be part on-premises computing and part hosted or SaaS; some would say critical computing (e.g. legacy systems) stays on-premises while new applications should be placed in the public cloud; still others are completely redefining IT strategy around a services model, assuming that at the end of the day what matters is delivering the right application, at the right time, from the most appropriate resource.
Hybrid: it’s a wonderful catchall word, and all of these solutions are correct, depending on your business needs, IT’s skills, and the culture of your company.
The reasons behind this transformation to a hybrid world, or, as we called it in another post, the Enterprise Defined Data Center (EDDC), are many and varied, from cost savings to agility to a desire to offload the constant upkeep of data center facilities. But as we move further toward an EDDC, the long-term organizational implications for IT are beginning to show. No matter what service or delivery mechanism IT uses to support a business process, the fundamental need remains the same: we remain responsible for the end-user experience.
If performance is poor, it’s IT’s problem. If costs are escalating, it’s IT’s problem. If SLAs are suffering, it’s IT’s problem. But the real problem IT is discovering is that very few tools exist to help visualize the complete end-to-end process, regardless of where the resources being used reside or who owns them. Never mind the secondary fact that many IT people simply don’t look at problems end-to-end; they look at them by technology stack. IT managers find themselves with many drill-down experts but few people who can see horizontally across technology silos and understand how the pieces all flow together. In these organizations, problem determination often means assembling Tiger Teams of experts from each stack and letting them figure out the problem, at best an incredibly expensive and inefficient way of solving problems.
So imagine, if you will, having the ability to visualize a complete business process, beginning to end: to see what components it uses, what performance levels (or issues) they have, what KPIs or SLAs it is hitting, and to map all the pieces into a coherent flow. I had a recent conversation with a CTO who articulated the problem like this:
1. Where is my application? I need to see all my mission critical apps (or even ALL my apps) across all the infrastructure installed in all my data centers, and in the hybrid cloud.
2. What about business process? I need to see high-level business services, the applications that make them up, and the infrastructure they run on.
3. What is happening with service performance and availability? I need to be able to visualize system faults, performance metrics, capacity issues, delays, and security and compliance gaps, wherever they occur.
4. How can my staff figure this out? I need my staff to be able to see the end-to-end connectivity and have the tools to quickly identify bottlenecks or issues that are impacting the end user experience.
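To make the CTO’s wish list concrete, here is a minimal sketch of the kind of model an EWM tool would have to maintain: business services mapped to the applications that implement them, applications mapped to components across on-premises and cloud infrastructure, with enough per-component telemetry to surface the end-to-end picture and the bottleneck. All names, numbers, and the serial-latency assumption are hypothetical, purely for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    location: str          # e.g. "on-prem-dc1" or "public-cloud" (point 1)
    latency_ms: float      # current measured latency for this component

@dataclass
class Application:
    name: str
    components: list[Component] = field(default_factory=list)

@dataclass
class BusinessService:     # the high-level view of point 2
    name: str
    sla_latency_ms: float  # end-to-end latency target for the service
    applications: list[Application] = field(default_factory=list)

    def end_to_end_latency(self) -> float:
        # Illustrative assumption: a serial flow, so total latency is the
        # sum across every component of every application in the service.
        return sum(c.latency_ms for a in self.applications for c in a.components)

    def sla_breached(self) -> bool:  # the visibility asked for in point 3
        return self.end_to_end_latency() > self.sla_latency_ms

    def bottleneck(self) -> Component:
        # The single slowest component, wherever it resides (point 4).
        return max(
            (c for a in self.applications for c in a.components),
            key=lambda c: c.latency_ms,
        )

# Hypothetical example: an order-entry service spanning cloud and on-premises.
service = BusinessService(
    name="order-entry",
    sla_latency_ms=500.0,
    applications=[
        Application("web-frontend", [Component("load-balancer", "public-cloud", 20.0),
                                     Component("app-server", "public-cloud", 120.0)]),
        Application("order-db", [Component("db-primary", "on-prem-dc1", 410.0)]),
    ],
)
print(service.end_to_end_latency(), service.sla_breached(), service.bottleneck().name)
```

The point of the sketch is the traversal: once the service-to-infrastructure mapping exists in one place, questions like “is the SLA breached?” and “which component, in whose data center, is the bottleneck?” become simple queries rather than a Tiger Team exercise.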
There are a few tools today that are beginning to pull these pieces together, but not many. For lack of a better term, I’m calling them Enterprise Workload Monitoring (EWM) tools, and I believe that as we build more and more complex hybrid environments, these tools will become critical to the success of IT.
Food for thought. Feedback welcome.