Let’s keep this simple: “self-service”, the art of moving work from supplier to customer, tends to reduce costs for the supplier and increase costs for the customer. At the same time it can sometimes improve service and choice for the customer, if the evolved service is re-designed effectively. Not all such services are.
Every new application that moves to the cloud is another potential application to integrate. As the number and variety of your applications moving to the cloud increase, so do your opportunities for integration. For master data, that small set of data used across your most common and critical business processes and apps, this will be no less complex than it was when those same apps were on-premise.
Another feature of this future that remains unchanged is the nature of the silo. An application in ‘cloud A’ will have had its own designer and its own data model. An application in ‘cloud B’ will have had its own designer and its own data model. There is no assumption or expectation that any data standard, let alone a master data standard, is supported. Thus, as your cloud deployments increase, so does the number of heterogeneous data stores.
Lastly, when all this was on-premise and you had an issue with integration, you simply went down to the third floor, flicked a few ears and slammed your fist on a desk. I paraphrase, of course. Now all you can do is send emails, request SLA changes and wait for the supplier to schedule a design review. Sounds like everything is getting easier, doesn’t it?
I saw a recent press release from IBM about OpenStack Services (which triggered this blog), though this post is not about IBM but about all the vendors offering self-service integration tools and services. After reading it, I figured that this offering is one of many vendor attempts to hand over to the customer the tools needed to handle (among many other things) integration between cloud and on-premise systems (or what is generally called, mistakenly, hybrid). Three thoughts came into my head:
- Integration is a complex and costly part of doing business. Application and integration vendors did a relatively poor job of making it simple and effective when all our problems were on-premise or even B2B. Even with B2B, you could imagine one ‘B’ was as good as any cloud app. The core problem remains: semantic differences.
- As cloud-to-cloud and cloud-to-on-premise integration requirements increase in quantity and complexity, what better response than to hand off the complex and costly work to the customer, under the auspices of ‘self-service’? This keeps all integration liability out of the SLA for the cloud app!
- Have the app or integration vendors changed their approach at all, to make integration orders of magnitude easier? Generally, no.
So there you have it. Self-service integration is coming to a cinema near you. And good luck to you.
After writing this blog I did suffer some remorse. Am I being too harsh with the vendor community? Am I being unfair? Upon reflection I think not. But there are some efforts ‘out there’ that do seek to help simplify some of the integration work:
- Open data: there is a growing source of self-describing (well, I mean pre-published) data with metadata that can help with one-way consumption, if that data is useful. The techniques of open data could perhaps be partially adopted by known trading partners, in a kind of ‘shared data’ model. For example, a trading partner or app publishes its data with its metadata and a collaboration key, so that only paid-for or known trading partners or apps can subscribe and access the data and/or metadata, and publish back changed or response data.
- Integration techniques: to be fair, these have gotten easier, since many techniques are more granular and more accessible to less-skilled IT staff. However, even though some of the techniques are easier, the larger issue of the quality of the data being integrated remains largely unchanged. The recent flurry of excitement around “data discovery” shows some signs of how finding and consuming meaningful data can be simplified, to a degree.
- Standards like OAG: industry and technical standards are always a good thing, if you can quantify the benefits over the cost of converting to or using them. They are neither a panacea nor a silver bullet. They should be used only when it makes sense. And that’s the problem: when and where does it make sense? Do they make sense for all partners or apps, or can you get value from partial use? Given the state of the infrastructure today and what already ‘works’, do standards provide a better way when much current investment is good enough? It’s not an easy question to answer.
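The ‘shared data’ model in the first bullet above could be sketched, very roughly, as a publish/subscribe exchange gated by a shared collaboration key. This is a hypothetical illustration, not any vendor’s API: the function and field names are invented, and HMAC signing stands in for whatever access control a real implementation would use.

```python
import hashlib
import hmac
import json

# Hypothetical sketch of a 'shared data' exchange: a publisher bundles data
# with its metadata and signs the bundle with a collaboration key, so that
# only partners holding the same key can verify and consume it.

def publish(data, metadata, collaboration_key):
    """Package data plus metadata and attach an HMAC signature."""
    payload = json.dumps({"data": data, "metadata": metadata}, sort_keys=True)
    signature = hmac.new(collaboration_key, payload.encode(),
                         hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}

def subscribe(bundle, collaboration_key):
    """Return the data and metadata only if the collaboration key matches."""
    expected = hmac.new(collaboration_key, bundle["payload"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, bundle["signature"]):
        return None  # unknown partner: no access to data or metadata
    return json.loads(bundle["payload"])

key = b"shared-secret-between-known-partners"
bundle = publish({"customer_id": 42, "name": "Acme"},
                 {"schema": "customer-v1", "published_by": "partner-a"},
                 key)

assert subscribe(bundle, key) is not None       # known partner gets the data
assert subscribe(bundle, b"wrong-key") is None  # unknown partner is refused
```

The point of the sketch is only the shape of the exchange: metadata travels with the data, and the collaboration key decides who gets to read it and publish responses back.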
So I correct myself. Vendors have helped, somewhat. It is not going to be a total mess in our collective cloud future. Some issues remain intractable. At least we should be aware of them, and perhaps avoid some of the predictable bumps.
Comments or opinions expressed on this blog are those of the individual contributors only, and do not necessarily represent the views of Gartner, Inc. or its management. Readers may copy and redistribute blog postings on other blogs, or otherwise for private, non-commercial or journalistic purposes, with attribution to Gartner. This content may not be used for any other purposes in any other formats or media. The content on this blog is provided on an "as-is" basis. Gartner shall not be liable for any damages whatsoever arising out of the content or use of this blog.