Kyle Hilgendorf

A member of the Gartner Blog Network

Kyle Hilgendorf
Research Director
3 years with Gartner
13 years in IT industry

Kyle Hilgendorf works as a Research Director in Gartner for Technology Professionals (GTP). He covers public cloud computing and hybrid cloud computing. Areas of focus include cloud computing technology, providers, IaaS, SaaS, managed hosting, and colocation. He brings 10 years of enterprise IT operations and architecture experience.

Further Q&A from AWS vs Azure Webinar

by Kyle Hilgendorf  |  September 25, 2014  |  4 Comments

Last week I conducted a Gartner webinar comparing Amazon Web Services against Microsoft Azure.  You can watch the replay here.

There were an incredible number of questions that came in electronically during the webinar, and I was unable to get to all of them in time.  Here are some of the submitted questions, each with a brief answer from me.  I have purposely not reworded the submitted questions.  These are not all of the questions.  If I have time I will address more in a second blog post.

Finally, several people asked why they could not access Gartner’s “Evaluation Criteria for Cloud IaaS Providers” or the In-Depth Assessments for AWS or Azure.  Gartner has several subscription models and these documents are only accessible to Gartner clients that have access to “Gartner for Technical Professionals”.  If you do not have access and would like to discuss this, please contact your Gartner Account Executive.

———————-

Q: Isn’t it more realistic to think that we’re likely to end up using BOTH AWS and Azure due to the pervasiveness of Microsoft across the enterprise (Exchange, SharePoint), even in shops that are not .NET shops?

I think it’s very possible that large organizations will find themselves using multiple cloud providers at the same layer (e.g. IaaS, PaaS) in the future.  Today, only a minority of my clients are using AWS and Azure simultaneously, but I think the rationale for this will increase over time for most organizations.  It might not be AWS and Azure, but I see a day where large organizations prefer to have 2-4 major IaaS/PaaS providers in place in order to deploy workloads in a best-of-breed fashion.  If your organization heads down this path, please do not underestimate the work involved in managing multiple provider relationships, or the effort involved in managing assets simultaneously at multiple providers.  This will impact your processes, your people and your integration points.

Q: Linux Virtual machines / Linux platform support on Azure?

What is your feel on Azure’s commitment to Linux as a platform in general? I see very little technical documentation, videos, TechEd/Azure Friday, or Microsoft Virtual Academy resources on Linux support in Azure. Everything I see, read and experiment with is Windows and Windows only. Given that, how can we even roll something enterprise class based on a Linux platform with no hand-holding commitment from Microsoft Azure?

First and foremost, I do not represent any vendor, including Microsoft.  With that being said, every indication to me from Microsoft has been that they are committed to supporting both Linux and open source workloads.  Azure already supports a variety of Linux distributions, including SUSE Linux Enterprise Server.  A relationship has not been worked out with Red Hat yet for RHEL, and I understand that can be a sore spot for many organizations.  The question I can’t answer is the degree of ease and automation that Microsoft will offer for non-Microsoft workloads.  We already know that Microsoft has put forth great effort to automate and orchestrate the provisioning of complex Microsoft stacks atop Azure, such as SharePoint, through PowerShell scripts and cmdlets.  Time will tell whether Microsoft will do this for Linux or whether a community of Azure users will take this upon themselves.  Every indication I have from Microsoft, however, is that it will continue to push forward with more non-Microsoft software and platform support, especially Linux.  I encourage you to discuss this with your Microsoft account manager and ask for a private conversation with Azure leadership.

Q: Is data sovereignty an issue that you come across and how do these providers deal with that?

Data sovereignty is an issue that comes up in client conversations, but not as much as it used to.  In the webinar I talked about Local and Global availability and the differences between AWS and Azure.  Both providers have protections in place whereby you as the customer can be assured that data does not cross country or geographic boundaries.  AWS gets a slight advantage here because if you want to increase availability at certain levels with Azure you have to set up datacenter pairs – potentially across country boundaries (e.g. Ireland + Amsterdam or Brazil + U.S.).  However, at the same time, you can choose not to do this with Azure if you prefer to keep your data within a single location.  Data sovereignty issues still come up for clients that have specific country requirements like Germany, France or Canada – locations where these providers do not yet have datacenters.

Q: Does the AWS custom tagging allow you to create multiple groups of assets using different permutations of tags, i.e. a server can live in multiple overlapping groups, e.g. a billing group vs. an application group vs. a project group?

Good question.  You can assign multiple tags to AWS assets.  For example, you could set up tags for department number, billing group, project name, etc.  Furthermore, you can have each of these tags pass through into your bill and then use a custom reporting tool (most use Excel Pivot Tables) to sort and filter based on whatever combination of tags you prefer.  Check out this documentation: http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/cost-alloc-tags.html
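
To make that concrete, here is a minimal sketch using the boto3 Python SDK – the instance ID and tag values below are hypothetical – showing one EC2 instance carrying a billing tag, an application tag and a project tag at the same time, and then being retrieved by any one of them:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Hypothetical instance ID; a single resource can carry many independent tags.
    instance_id = "i-0123456789abcdef0"

    ec2.create_tags(
        Resources=[instance_id],
        Tags=[
            {"Key": "BillingGroup", "Value": "finance-42"},
            {"Key": "Application", "Value": "order-entry"},
            {"Key": "Project", "Value": "q4-migration"},
        ],
    )

    # The same instance shows up under any of the overlapping groupings.
    project_members = ec2.describe_instances(
        Filters=[{"Name": "tag:Project", "Values": ["q4-migration"]}]
    )

Once tag keys like these are activated as cost allocation tags (per the documentation linked above), they appear as columns in the detailed billing report and can be sliced in whatever combination you prefer.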

Q: How do you compare the storage offerings from Azure to AWS?
The non-existent virtual block storage offering in Azure is a big issue for running real-world database workloads.
The fact that BLOBs, Queues and Tables are not block storage based but object storage based is a huge problem for any enterprise workloads.
Currently Azure supports only 500 IOPS at 512-2K chunk size.
They are working on a PIOPS model but the IOPS is not at an enterprise level.

What are your views on this?

My colleague Angelina Troy is the resident expert on this.  She has published an in-depth comparison of public cloud storage services from AWS, Azure and Google.  Gartner for Technical Professionals clients can access that document here.

Q: I have internal requirements from the Federal Government. What provider can be a better option for a government?

This is highly dependent upon your specific federal requirements – they vary quite a bit.  AWS does offer GovCloud, which is a unique ITAR-compliant region just for agencies or contractors that meet certain specifications.  Microsoft does not yet have a government-only region of Azure in general availability.  We do know of some government entities using Azure despite this, though.  If you are a federal agency and interested in a federal cloud from Azure, I encourage you to discuss private preview/beta options with your Microsoft representative.

Q: Does Azure have the same concepts of pay-by-the-minute, spot or bid pricing?

Azure does offer per-minute billing but does not have an auction-style model like AWS’ spot pricing.

Q: Regarding: AWS can scale applications on demand.

Can this feature be used to lower the pricing during idle times by a lot?

I’d need more specific context about the application design – but in theory the answer is yes.  Let’s say that your application is horizontally scaled out to 10 web servers during the day to handle load.  If you set up Auto Scaling, Elastic Load Balancing and CloudWatch monitoring appropriately, you could scale those 10 web servers down to 8, 6, 5, 2 or 1 during the evenings/weekends and then back up when you need them.  Considering that at least 6-8 hours of each day could be a trough for your application, you might be able to realize up to 30-40% savings over a static design.  There are a lot of intricacies in how you set up such an application, but plenty has been published in the industry about “cloud-native”, “scale-out” and “scale-in” designs.
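
As a sketch of one way to do the scale-in/scale-out piece – using scheduled actions on an Auto Scaling group via the boto3 Python SDK, with a hypothetical group name and UTC schedule – the idea looks roughly like this:

    import boto3

    autoscaling = boto3.client("autoscaling", region_name="us-east-1")

    GROUP = "web-tier-asg"  # hypothetical Auto Scaling group name

    # Drop to a single web server overnight (times are UTC cron expressions).
    autoscaling.put_scheduled_update_group_action(
        AutoScalingGroupName=GROUP,
        ScheduledActionName="nightly-scale-in",
        Recurrence="0 2 * * *",
        MinSize=1,
        MaxSize=10,
        DesiredCapacity=1,
    )

    # Scale back out before the business day begins.
    autoscaling.put_scheduled_update_group_action(
        AutoScalingGroupName=GROUP,
        ScheduledActionName="morning-scale-out",
        Recurrence="0 12 * * *",
        MinSize=8,
        MaxSize=10,
        DesiredCapacity=10,
    )

In practice most designs combine schedules like these with CloudWatch-driven scaling policies so capacity also follows actual load rather than just the clock.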

Q: I have not heard good things about Microsoft support, even when a customer has Premier Support. Is Amazon even worse?

When you have the number of client conversations over a 3+ year period that I’ve had, you hear many positive and negative comments on just about every vendor’s support.  With that said, I have heard mostly positive feedback on both the Azure and AWS support plans.  The one benefit to Premier Support that I highlighted in the webinar is that Microsoft Premier Support will not stop at only helping with “cloud issues”.  If you are running Windows Server and a .NET application atop Azure, Premier Support will assist you with Windows Server or .NET issues as well as Azure issues.  That is a nice touch for a complete Microsoft stack.  AWS will offer best-effort support for things like Windows Server – but they are not Microsoft – they can’t be expected to provide intimate support for Windows Server.  With all this said, given enough time you will probably eventually have a bad experience with every support organization.  Any of us that have called our home broadband or cable/satellite TV support could attest to this!  However, I’ve never sensed a trend that says Azure or AWS support is bad.

Q: What’s a strong platform that integrates well with Azure/AWS and enables development in Azure/AWS, while abstracting out the dependencies and providing portability?

Great question.  This could mean several things, so I may have to guess at the intent of the question.  It sounds like you are interested in adopting an abstracted development platform that allows you to develop against either AWS or Azure and port back and forth as needed.  This idea is mostly a fantasy utopia at the present time, but certain things are intriguing.  If you are interested in abstracted APIs that translate to multiple cloud providers, you might want to look at Apache jclouds, Apache Libcloud or Dasein.  Just pay careful attention to the support, or lack thereof, for various providers.  If you are looking for a broker that helps you manage both providers through the same process, check out RightScale, CSC Agility Platform or Dell Cloud Manager.  But we have to remember that these are very different services architected differently.  If you want to deploy a database schema to AWS RDS and then port that to Azure SQL – you’ll need to do some manual work.  If you want to deploy to Azure Load Balancing (ALB) and then move to AWS Elastic Load Balancing (ELB) – you’ll find nuances and differences between the designs.  Therefore, portability is far from a reality and I doubt we’ll ever really see it.  A day where these services are identical means that differentiation has died.  And competitive capitalism will not let that happen.
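
Of the libraries mentioned above, Apache Libcloud is the Python option, and a minimal sketch of its abstraction looks like the following (the credentials are placeholders, and the exact constructor arguments and provider coverage vary by Libcloud release):

    from libcloud.compute.types import Provider
    from libcloud.compute.providers import get_driver

    # Placeholder credentials; in practice these come from configuration or a vault.
    ACCESS_KEY = "AKIA..."
    SECRET_KEY = "..."

    # Obtain a driver for one provider; retargeting another cloud means swapping
    # the Provider constant (and supplying whatever credentials it expects),
    # subject to what the installed Libcloud release actually supports.
    cls = get_driver(Provider.EC2)
    driver = cls(ACCESS_KEY, SECRET_KEY, region="us-east-1")

    # The generic NodeDriver interface hides the provider-specific API calls.
    for node in driver.list_nodes():
        print(node.name, node.state, node.public_ips)

    sizes = driver.list_sizes()
    images = driver.list_images()

The abstraction covers common compute primitives reasonably well, but anything provider-specific (managed databases, native load balancers, queues) falls outside it and still has to be handled per provider.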

My advice to clients interested in this is to avoid proprietary features whenever possible.  For example, instead of choosing DynamoDB or Azure Tables for a NoSQL database, opt for a virtual machine with MongoDB, Redis, or CouchDB atop it.  In that scenario, you can always redeploy a VM with the other provider, reinstall the database and migrate the data.  Or, rather than ALB or ELB for load balancing, consider HAProxy or a virtual appliance load balancer that is supported at multiple providers.  However, providers’ proprietary services are popular because they are cheap and easy to integrate with other services at that provider.  So that tug and pull will always be there.

Q: Which provider is more flexible in terms of providing scalability? AWS has certain restrictions like no vertical scalability, can’t increase individual components like vCPU or RAM of an instance, and block storage volumes can’t be extended beyond 1 TB. What are your inputs on this?

My webinar highlighted that AWS is the choice if you need the highest levels of scalability.  That is not to disparage Azure; it’s simply supported by multiple data points from our Evaluation Criteria research.  AWS can increase individual components like vCPU and RAM in a vertical scalability sense – it’s simply a bit different from its horizontal scalability.  You can shut down an EC2 instance and resize it (e.g. m3.medium to m3.large) as long as the processor architecture and OS type (e.g. 64-bit) are the same.  I find a lot of AWS customers don’t know this, but it is possible to achieve.  Just be careful when moving from micro or small instances because those are often 32-bit or single-processor architectures.
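
A rough sketch of that resize flow with the boto3 Python SDK (hypothetical instance ID, and assuming an EBS-backed instance, since instance-store instances cannot be stopped):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    instance_id = "i-0123456789abcdef0"  # hypothetical instance

    # Vertical resize requires a stop/start cycle (a reboot is not enough).
    ec2.stop_instances(InstanceIds=[instance_id])
    ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

    # Change the instance type; the AMI's architecture and virtualization type
    # must be compatible with the target size.
    ec2.modify_instance_attribute(
        InstanceId=instance_id,
        InstanceType={"Value": "m3.large"},
    )

    ec2.start_instances(InstanceIds=[instance_id])
    ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])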

It is true that EBS has a 1 TB limit.  However, Azure’s block blob size limit is currently 200 GB.  So if you are in need of block volumes larger than 1 TB, I think you are out of luck right now with either provider unless you contact the provider and ask if something special can be arranged for you (sometimes it can).


Category: AWS Cloud Evaluation IaaS Microsoft Providers

VMworld 2014 – My Impressions of Cloud Announcements

by Kyle Hilgendorf  |  August 28, 2014  |  1 Comment

VMworld 2014 took place in San Francisco, CA this week, with ~22,000 attendees descending on the annual event that showcases VMware’s newest announcements and product/service advancements.  I attended once again to pay particular attention to VMware’s cloud movements.  Here are a few of my impressions.

vCloud Air

Prior to VMworld, VMware announced a rebranding of vCloud Hybrid Service (VMware’s public cloud offering) to vCloud Air.  A name is just a name, so I don’t really mind this change and in fact I think it is a simpler brand name.  And simple is good.

Most important for vCloud Air is its future direction.  vCloud Air is now 1 year old, and Gartner clients routinely tell me that in its current state it does not stack up well in feature set against other public CSPs, namely Amazon Web Services or Microsoft Azure.  However, there continues to be incredible interest in the future of vCloud Air from the loyal VMware customer base.  But where is it going, and how fast?

VMware announced several major service expansions including an on-demand pricing model, an object storage service, a database as a service (DBaaS) offering, and a relationship with AT&T for NetBond connectivity into vCloud Air.  There were many more announcements surrounding vCloud Air.

VMware leadership informed me that they don’t intend to get into a feature-by-feature war with other major cloud providers and will instead focus on use case differentiation.  DRaaS and DaaS were mentioned multiple times in multiple venues as examples.

I believe vCloud Air will actually have to do both – compete on features and differentiate on use cases.  Right now features are king in the IaaS and PaaS markets, and customers will have decreasing tolerance for a non-competitive feature set.  However, I think the innovation engine within vCloud Air is starting to move and the next 12 months will be fascinating.  I also suspect VMware might have some tricks up their sleeves and look to differentiate on non-technical features that relate well to large enterprises with significant VMware investments.

Each of the services mentioned above is now a baseline, mandatory feature that all major clouds must have – so in many ways, vCloud Air is still far behind.  Furthermore, these announced services are just now in beta and will not move into GA for another quarter or two.  Unfortunately, there were no announcements around pricing of these services, and VMware will need to be careful here not to price themselves out of fierce competition.

Enterprises are quickly moving from tactical to strategic selections of IaaS and PaaS providers and these decisions are often made on current features and future roadmap.  It’s not too late for vCloud Air but the clock ticks fast in this market.  VMware will need to aggressively move these services from beta to GA and expand into other important features such as auto scaling, advanced auditing/logging and identity and access management.

vRealize Air Automation

Another common complaint I hear from customers that evaluate vCloud Air is that significant sets of features are only available if you are running the on-premises vCloud Suite – vCloud Automation Center (vCAC) and vCenter Operations Manager (vCOps) being the two significant packages.  Unfortunately, a lot of the VMware customers that would benefit from vCloud Air are not paying the hefty license fees to operate and run vCAC and vCOps.  Therefore, when these customers evaluate vCloud Air, the automation, management and monitoring functions of the native vCloud Air interface are less than impressive.

VMware announced that their management suite is now rebranded to “vRealize” and that a new SaaS-based version of the suite, named vRealize Air Automation, would be rolled out.  This is a big deal for future vCloud Air adoption because it no longer means that customers must run vCAC or other components internally – a requirement that limited a lot of vCloud Air use to very large VMware customers.  Those same very large VMware customers also tend to have robust environments and large datacenters – thereby potentially not yet needing vCloud Air.  A SaaS-based solution for the vRealize Suite will open the door to many new vCloud Air customers – namely those without the on-premises tools.  It will also allow VMware to innovate on a feature set roadmap faster than it can in a shipping product with major and minor version releases.

I expect the vRealize Air Automation solution to start to look and function more like the management consoles of AWS, Azure, Google Cloud Platform or IBM SoftLayer.  It will also likely go much further and start to compete with popular SaaS-based cloud management brokers like CSC ServiceMesh, Dell Cloud Manager or RightScale.  According to customers, VMware is a very good management company, and management is one of the stickiest reasons to continue to leverage VMware technologies.

We do not yet know about the pricing of vRealize Air Automation, nor what the current feature set / roadmap looks like.  This is slightly disappointing but not all that unexpected from typical major conference announcements where the announcement generates buzz and the details filter out afterward.  I will be paying very close attention to this because I believe management is the emerging ugly issue in public cloud services.

VMware Integrated OpenStack (VIO)

One of the announcements lightest on details is also one of the ones I am most intrigued by.  VMware announced VIO, essentially a VMware-based distribution of OpenStack.  Right now, VIO basically just exposes OpenStack APIs on top of VMware infrastructure.  But it holds longer-term promise.  OpenStack is still plagued by problems with installation, vendor support and management.  But when you consider the large VMware install base within organizations, with a lot of untapped capacity, organizations may want a shortcut to convert some of that into an OpenStack cloud.  This is where VMware could do quite well.  VMware might be able to deliver this simplicity, but in its current iteration, I think it is still far from that.  More important, I believe, is the potential for VMware to bring great management to an OpenStack solution.

Some industry experts want you to believe that you don’t need to manage a cloud.  That is far from the truth; there is a lot of management necessary, you just manage different “things”.  For example, consider something like auto scaling.  You may need to manage VMs less than in a traditional architecture, but you’ll have to manage the auto scaling group, the policies assigned to it, and the configuration and change of those policies.  If VMware focuses hard on all the difficult management aspects of OpenStack, VIO has legs.
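
To make that concrete with the most familiar example – and purely as an illustration, using AWS-style APIs via the boto3 Python SDK with hypothetical names – these are the kinds of artifacts that still need ownership, configuration and change control even when the individual VMs do not:

    import boto3

    autoscaling = boto3.client("autoscaling", region_name="us-east-1")
    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    GROUP = "web-tier-asg"  # hypothetical auto scaling group

    # The scaling policy is itself a managed object: its adjustment type and
    # step size are configuration that someone owns and changes over time.
    policy = autoscaling.put_scaling_policy(
        AutoScalingGroupName=GROUP,
        PolicyName="scale-out-on-cpu",
        AdjustmentType="ChangeInCapacity",
        ScalingAdjustment=2,
        Cooldown=300,
    )

    # So is the alarm that triggers it: the metric, threshold and evaluation
    # periods all need monitoring, governance and change management.
    cloudwatch.put_metric_alarm(
        AlarmName="web-tier-cpu-high",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "AutoScalingGroupName", "Value": GROUP}],
        Statistic="Average",
        Period=300,
        EvaluationPeriods=2,
        Threshold=75.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=[policy["PolicyARN"]],
    )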

Docker and Kubernetes Collaboration

Although not a new technology, containers are all the rage in 2014 and will continue to be in 2015.  The hype in the industry has been that containers will replace VMs and that VMware will be severely impacted.  Well, VMware countered this hype strongly at VMworld with an announcement of Docker and Kubernetes collaboration and contribution.  I’ve always thought that there is room for containers and VMs to live together for the next several years.  I see value in two layers of encapsulation, one at the OS (VM) and one at the app (container), and we cannot ignore the enterprise readiness of VM security and VM management tools.  Container management and security still need improvement, so why not combine the two worlds?

This announcement is a very proactive move by VMware.  The leadership clearly sees the value in containers and might even admit that far into the future VMs could be at risk.  Well, if that happens, it now looks like VMware is set up to adjust accordingly.  If container management and Kubernetes functionality are integrated into existing VMware management tools, consider the future vision of managing both VMs and containers from a single pane of glass, either in a hybrid (VM and container) world or in a transition (VMs to containers) world.  This is a huge move and perhaps the best of the lot at VMworld.

There were several other fascinating announcements a bit further outside of my core coverage space, so I encourage you to digest the press releases.  Gartner clients should then contact the appropriate analyst at Gartner for a more in-depth inquiry about what each announcement means for your organization.

What did you think about VMworld 2014?


Category: Cloud Hybrid IaaS Management OpenStack Providers vCloud VMware

The Emerging Ugly Issue in the Public Cloud: Enterprise Management

by Kyle Hilgendorf  |  August 6, 2014  |  5 Comments

Cloud surveys almost always cite security concerns as the top issue impacting public cloud adoption.  I am not debating this finding but I feel it is now prudent for me to highlight the emerging ugly item that may pass security as the top issue before too long – Enterprise Management.

Public cloud adoption continues to push forward with almost all of Gartner’s clients.  In the midst of this happening, I now have an increasing number of phone inquiries with clients that claim to be entering “significant adoption”.  This has variable meaning but in essence it means using many providers (SaaS, PaaS and IaaS) and deploying many assets per provider (applications, VMs, storage volumes, accounts, policies, etc).

I now have many calls about how to manage “all of this stuff”.  In small cloud deployments, tiger teams can typically manage the relationships and the assets at those providers.  But as widespread adoption sets in, things quickly spiral out of control.

All parts of ITIL or ITOM tend to be impacted.  Clients cite management frustrations such as asset, deployment, configuration, change, financial or lifecycle management, to mention a few.  Creating even more complexity is the fact that no two cloud providers are created equal.  What a client might be able to manage at cloud provider X with native services might not be an option with cloud provider Y.  Or a client might have a plugin for their enterprise management tool for cloud provider A but not cloud provider B.  This creates islands of management where the cloud buyer has no other choice but to create management processes uniquely at each cloud provider or forego professional management altogether.  This may be acceptable with limited deployments or for non-production systems, but at large scale and for critical systems it quickly becomes a show stopper.

A market exists for cloud services brokers that offer management capabilities and the industry is seeing improvement.  Furthermore, existing enterprise management tools are expanding their plugins to support more cloud providers.  But, most cloud brokers only support a handful of cloud platforms or providers and the majority of plugins for enterprise management tools only support a subset of functionality at the provider they “support”.  The cloud buyer must then perform a gap analysis of what comes out of the box and what must be built custom to professionally manage operations.

I believe we are approaching a critical state in the next year whereby most cloud buyers will start to experience significant frustrations with cloud management.  These cloud buyers must create centers of excellence within IT to build professional cloud management organizations, and these organizations must get comfortable with having multiple toolsets in play.  The alternative is running out of control in the cloud or not running in the cloud at all.  Gartner provides assistance with the Solution Path for Public Cloud Adoption Maturity Plan.

Next week at Gartner Catalyst I will be presenting a high level assessment of how four major IaaS providers (AWS, Azure, Google and vCHS) do at providing capabilities for Asset Management, Deployment Management and Financial Management.  If you are attending Catalyst in San Diego, please stop in and engage in the session.

What do you find to be your biggest problems in managing cloud providers or deployments within the public cloud?


Category: Cloud Management

Cloud Research Positions at Gartner

by Kyle Hilgendorf  |  July 29, 2014  |  2 Comments

Gartner has two open positions covering cloud computing right now, and I wanted to entice those of you who are interested to look at the positions, read this blog and, if it sounds like a fit – apply.  Do not get hung up on the location of either of these positions.  Gartner is truly a work-from-anywhere environment.

Virtualization and Private Cloud Analyst

Public and Hybrid Cloud Analyst

I have now been at Gartner 3.5 years, and I am often asked by peers, clients, vendors, colleagues and friends what it’s like working as an analyst at Gartner.  As I reflect on my time at Gartner, here are the things I love most about working here.

  1. The Gartner research community is an incredible thought-leader warehouse that has a small family feel.  On a daily basis I am blown away by the amazing depth of thought and analysis that comes out of the collective Gartner research division.  You would think that a group of this many intellectual individuals, most of whom have been high performers in previous jobs, would come with an insane amount of ego and competitive natures.  But by and large, the majority of the time, I find quite the opposite.  The team atmosphere and dedication to uncovering the right analysis drive a true value for one another and partnership toward the greater goal rather than individual achievements.
  2. Being a Gartner analyst is a truly unique industry position.  Over the course of each of my years, I have had the opportunity to speak intimately with several hundred end-user organizations.  These engagements happen daily on phone inquiries and regularly in conference one-on-ones or on-site, face-to-face visits.  I had a really good perspective prior to Gartner about what the company I worked for needed and wanted.  But now I have a great perspective on what many organizations collectively want and need for business solutions.  This knowledge allows me to think about and analyze what the industry actually needs and make recommendations to vendors and providers to help move the industry forward.  I have truly come to cherish my access to each of the Gartner clients that interact with me and the trust they put in me to help advise them strategically and tactically.  Finally, Gartner is objective.  We do not accept any vendor money to sponsor research.  Not once have I ever been influenced or forced to write anything other than what my own research has uncovered.  Gartner takes this very seriously and it is what makes Gartner the best analyst firm in existence.
  3. Gartner offers a great work-life balance.  The majority of Gartner analysts get to work from home.  Working from home is not for everyone and it could get lonely, but personally I love the flexibility it offers for me and my family and the quiet atmosphere that a corporate environment with cubicle farms can never offer.  Furthermore, even though most of us work from home, I very much feel part of a team environment.  We leverage phone and video conference technologies frequently and engage in conversations to keep the interaction alive.  In many ways I feel as much a part of a team working from home as I ever felt working in a cubicle farm.
  4. Gartner is committed to the needs of our clients.  It’s a statement all companies make.  But I have found Gartner to really mean it.  I think a big reason for this is that each of us talks directly with our clients every single day.  It’s easy to remember your focus when you interact with that focus (our clients) routinely.  But Gartner as a company keeps investing in what our clients want also.  A great example is these two open positions.  Cloud computing continues to be a fast-growing and in-demand coverage area for our clients.  Therefore, we are hiring more experts.  These experts will get to work alongside already great cloud colleagues.
  5. Gartner stratifies its research.  Gartner has always been known as the best for CIO and senior IT leadership research.  But Gartner has also broadened and invested in other areas of research.  I work in the Gartner for Technical Professionals research division, an area of research aimed at senior-level technology professionals (e.g., enterprise architects and engineers).  This research division completes a holistic research offering that other analyst firms simply cannot offer.  It also allows us internally to collaborate among analysts that specialize in all levels of an IT organization to deliver timely, accurate and tailored research to each individual in an IT organization, specific to their current role.

A while back, my colleague Lydia Leong wrote two separate blog entries about working for Gartner that I will link here. I encourage you to also read her insights.

http://blogs.gartner.com/lydia_leong/2013/08/26/five-more-reasons-to-work-at-gartner-with-me/
http://blogs.gartner.com/lydia_leong/2011/12/12/five-reasons-you-should-work-at-gartner-with-me/

Do you love research, analysis and opportunities to expand your insight into IT and the industry as a whole?  Do you have a specific expertise in private, hybrid or public cloud right now?  If so, click the links at the top, apply, and hopefully join our great team!   I look forward to meeting you.  If you would like to engage in a private conversation first, please email me at kyle <dot> hilgendorf <at> gartner.com


Category: Cloud Gartner

AWS moves from ECU to vCPU

by Kyle Hilgendorf  |  April 16, 2014  |  10 Comments

In a quiet move yesterday, Amazon Web Services apparently abandoned their Elastic Compute Unit (ECU) approach to describing and selling EC2 instance types in favor of a more traditional vCPU approach. This was done without any formal announcement, and I wonder what effect it will have (positively or negatively) on customers.

For existing AWS customers that have grown accustomed to ECU over the course of the past years, this could be a somewhat disruptive change, especially for those at larger scale that have invested a lot of time and money optimizing instance size and horizontal scalability based on their own performance testing and analysis of which kind and how many EC2 instances they need for their use case. Initially, this may not matter much for existing deployments, but it will have an impact on scaling out or for new use cases. Bottom line – these types of customers are pretty savvy and will find ways to adjust.

For new or prospective AWS customers, the ECU was always a gnarly concept to grasp, and it took time. More traditional deployments, like those based upon VMware, were always described in vCPUs. Bottom line – more traditional IT Ops admins and new AWS customers will likely welcome this move as a step toward familiarity and simplicity.

However, for all customers, there is one aspect of this that could be problematic. AWS is a massive-scale cloud provider, with a wide mix of servers and processor architectures in existence. Therefore, two instances, each with 2 vCPUs, will not necessarily be equivalent. One instance could reside on top of a 2012-based processor while the other could reside on top of a 2014-based processor. Many people have written about the fact that EC2 processor architecture varies across instance types and across regions, even for instances described as having the “same specs”. Therefore, some savvy organizations have moved to a “deploy and ditch” strategy whereby they deploy many instances, interrogate them all for processor architecture and then ditch all the ones that are not up to the current or fastest specs.
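
As a rough sketch of that “deploy and ditch” idea with the boto3 Python SDK – the AMI, counts and target processor below are hypothetical, and the interrogation step is a placeholder because the host CPU model is typically read from /proc/cpuinfo on each instance rather than from any AWS API:

    import boto3

    ec2 = boto3.resource("ec2", region_name="us-east-1")

    # Launch a batch of nominally identical instances.
    instances = ec2.create_instances(
        ImageId="ami-12345678",      # hypothetical AMI
        InstanceType="m3.large",
        MinCount=10,
        MaxCount=10,
    )

    def cpu_model(instance):
        """Placeholder: report the host CPU model, e.g. by reading
        /proc/cpuinfo over SSH or via a configuration management agent."""
        raise NotImplementedError

    WANTED = "E5-2670 v2"  # hypothetical target processor generation

    keep, ditch = [], []
    for instance in instances:
        instance.wait_until_running()
        (keep if WANTED in cpu_model(instance) else ditch).append(instance)

    # Keep the instances on the newer silicon; terminate the rest and repeat.
    if ditch:
        ec2.instances.filter(InstanceIds=[i.id for i in ditch]).terminate()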

This further escalates an important transparency issue for AWS. AWS will need to clarify the physical processor architecture strategy per instance type or instance family. As a customer, I will want to know which instance types are based on Sandy Bridge processor architectures, for example – because that tells me what a vCPU will equate to. I will want to know the processor strategy similarities/differences between an m2 and an m3, or between an m3.medium and an m3.large. And if there are no differences – I will want to know that also and have something in writing stating as much.  Customers wanted this before with ECU, but ECU gave AWS a way to deflect these customer questions.

ECU was a foreign concept to grasp initially, but it did provide one benefit – a standard of measure. Now that AWS has moved to a vCPU strategy, will customers applaud or complain? I’d love to hear your thoughts in the comments below.


Category: AWS Cloud

Microsoft joins the Open Compute Project (OCP) – Cloud Transparency

by Kyle Hilgendorf  |  January 28, 2014  |  1 Comment

Today Microsoft announced that it is joining the Open Compute Project by contributing what Microsoft calls the “Microsoft cloud server specification”.  To date, almost all of the major public cloud services have failed to expose the inner workings and configurations of the infrastructure that powers the public cloud service.  At Gartner, we often advocate (on behalf of our clients) the importance of exposing underlying infrastructure configuration to cloud customers.  In fact, in our research, “Evaluation Criteria for Public Cloud IaaS Providers“, we have a specific set of requirements stipulating published infrastructure transparency by IaaS providers.

With the ever-increasing demand for hybrid cloud architectures, customers really do need some level of insight into the underlying infrastructure configuration, especially in IaaS, in order to assess the risk and compatibility of using the environment.  Furthermore, understanding the relevant details of the configurations impacts migration, compliance, licensing, configuration and performance.

Obviously, providers can go too far and expose too much information to customers, which could lead to targeted security attacks.  Gartner is not advocating sharing information such as the location and number of surveillance cameras, the number of trained people on site at any one time, or the security policies configured for IDS/IPS systems.  But what Microsoft is doing today, I believe, strikes the right balance.

I believe it also continues to confirm that Microsoft is not only serious about playing in the cloud provider market, but is also listening to enterprise requirements and taking obstacles out of the equation.  Customers will now be able to clearly understand the makeup of servers and configurations within Windows Azure, discern local levels of redundancy and availability, and make intelligent decisions about when to use Fault Domains or larger availability protections such as deployments into multiple locations, geographies or additional providers.  At the end of the day, customers want as much information as they can get to make the most informed decisions.

Furthermore, what the Microsoft blog entry does not highlight is the long-term benefit that sharing these details offers to large customers and partners for building hybrid clouds and reaping their benefits.  As the Microsoft and OCP initiative moves forward, there is no reason why large customers and partners cannot start to deploy the same Azure-like infrastructure internally and ensure compatibility in a hybrid cloud architecture as workloads migrate to the public cloud or back to the internal, private cloud.

I’ll be closely watching this evolution in Microsoft’s strategy and paying attention to how enterprise customers react.  I will also be very curious what (if any) impact this has on AWS.  Microsoft has often emulated moves AWS has made (especially with price cuts) and it will be fascinating to see if AWS responds to this by increasing the transparency of their environment to customers.

What will you be watching for?

 


Category: Cloud Microsoft

We’re Hiring Cloud Experts – Why Work for Gartner?

by Kyle Hilgendorf  |  January 23, 2014  |  1 Comment

In just two short weeks I will hit my third anniversary with Gartner.  I am often asked by peers, clients, vendors, colleagues and friends what it’s like working as an analyst at Gartner.  But more importantly, we have two new open positions right now, and I wanted to entice those of you who are interested to look at the positions, read this blog and, if it sounds like a fit – apply.

Virtualization and Private Cloud Analyst

Public and Hybrid Cloud Analyst

As I reflect on my three years at Gartner, here are the things I love most about working here.

  1. The Gartner research community is an incredible thought-leader warehouse that has a small family feel.  On a daily basis I am blown away by the amazing depth of thought and analysis that comes out of the collective Gartner research division.  You would think that a group of this many intellectual individuals, most of whom have been high performers in previous jobs, would come with an insane amount of ego and competitive natures.  But by and large, the majority of the time, I find quite the opposite.  The team atmosphere and dedication to uncovering the right analysis drive a true value for one another and partnership toward the greater goal rather than individual achievements.
  2. Being a Gartner analyst is a truly unique industry position.  Over the course of each of my years, I have had the opportunity to speak intimately with several hundred end-user organizations.  These engagements happen daily on phone inquiries and regularly in conference one-on-ones or on-site, face-to-face visits.  I had a really good perspective prior to Gartner about what the company I worked for needed and wanted.  But now I have a great perspective on what many organizations collectively want and need for business solutions.  This knowledge allows me to think about and analyze what the industry actually needs and make recommendations to vendors and providers to help move the industry forward.  I have truly come to cherish my access to each of the Gartner clients that interact with me and the trust they put in me to help advise them strategically and tactically.  Finally, Gartner is objective.  We do not accept any vendor money to sponsor research.  Not once have I ever been influenced or forced to write anything other than what my own research has uncovered.  Gartner takes this very seriously and it is what makes Gartner the best analyst firm in existence.
  3. Gartner offers a great work-life balance.  The majority of Gartner analysts get to work from home.  Working from home is not for everyone and it could get lonely, but personally I love the flexibility it offers for me and my family and the quiet atmosphere that a corporate environment with cubicle farms can never offer.  Furthermore, even though most of us work from home, I very much feel part of a team environment.  We leverage phone and video conference technologies frequently and engage in conversations to keep the interaction alive.  In many ways I feel as much a part of a team working from home as I ever felt working in a cubicle farm.
  4. Gartner is committed to the needs of our clients.  It’s a statement all companies make.  But I have found Gartner to really mean it.  I think a big reason for this is that each of us talks directly with our clients every single day.  It’s easy to remember your focus when you interact with that focus (our clients) routinely.  But Gartner as a company keeps investing in what our clients want also.  A great example is these two open positions.  Cloud computing continues to be a fast-growing and in-demand coverage area for our clients.  Therefore, we are hiring more experts.  These experts will get to work alongside already great cloud colleagues such as Lydia Leong, Alessandro Perilli, Chris Gaun, Gonzalo Ruiz, Drue Reeves, Douglas Toombs…and many, many others.
  5. Gartner stratifies its research.  Gartner has always been known as the best for CIO and senior IT leadership research.  But Gartner has also broadened and invested in other areas of research.  I work in the Gartner for Technical Professionals research division, an area of research aimed at senior-level technology professionals (e.g., enterprise architects and engineers).  This research division completes a holistic research offering that other analyst firms simply cannot offer.  It also allows us internally to collaborate among analysts that specialize in all levels of an IT organization to deliver timely, accurate and tailored research to each individual in an IT organization, specific to their current role.

A while back, my colleague Lydia Leong wrote two separate blog entries about working for Gartner that I will link here. I encourage you to also read her insights.

http://blogs.gartner.com/lydia_leong/2013/08/26/five-more-reasons-to-work-at-gartner-with-me/
http://blogs.gartner.com/lydia_leong/2011/12/12/five-reasons-you-should-work-at-gartner-with-me/

Do you love research, analysis and opportunities to expand your insight into IT and the industry as a whole?  Do you have a specific expertise in private, hybrid or public cloud right now?  If so, click the links at the top, apply, and hopefully join our great team!   I look forward to meeting you.  If you would like to engage in a private conversation first, please email me at kyle <dot> hilgendorf <at> gartner.com


Category: Cloud Gartner

Cloud Exit Strategies – You DO need them!

by Kyle Hilgendorf  |  September 18, 2013  |  9 Comments

My colleague, Jay Heiser, also has a good take on this in his blog.  I will not repeat his thoughts.

Multiple media outlets have been reporting that Nirvanix, a popular public cloud storage provider, is closing down and giving customers only two weeks (reports now say October 15 instead of September 30) to get their data off the service.  Providing further evidence, Gartner has been receiving client inquiry requests in the last 24 hours from Nirvanix customers asking for immediate planning assistance in moving off the Nirvanix service.

Gulp.

What are clients to do?  For most – react…and react in panic.  You have two weeks.  Go!  You don’t have time to worry about how much data you have stored there.  You don’t have time to upgrade network connections or bandwidth.  You don’t have time to order large drives or arrays to ship to the provider to get your data back.  You may not even get any support from the provider!  You may be facing the worst company fear – losing actual data.

Gartner has been advocating the importance of cloud exit strategies to clients for some time.  In Gartner for Technical Professionals, we even published a very comprehensive strategy document titled, “Devising a Cloud Exit Strategy: Proper Planning Prevents Poor Performance“.  I’m sad to say, however, that compared to many other Gartner research documents, this document has not seen nearly the amount of demand or uptake from our clients.  Why is that?  I suspect it is because cloud exits are not nearly as sexy as cloud deployments – they are an afterthought.  It’s analogous to disaster recovery and other mundane IT risk mitigation responsibilities.  These functions rarely receive the attention they deserve in IT, except immediately following major events like Hurricane Sandy or 9/11.

Does that mean this news regarding Nirvanix will be a catalyst for cloud customers to pay attention to the importance of building exit strategies?  Perhaps.

If you are a Nirvanix customer, it’s too late to build a strategy.  Drop whatever you are doing and get as much of the data as you can back immediately.

If you are a customer of any other cloud service (that is basically all of us) – take some time and build a cloud exit strategy/plan for every service you depend upon.  Cloud providers will continue to go out of business.  It may not be a frequent occurrence, but it will happen.  And even if your cloud provider does not go out of business, here is a list of many other factors which may signal that you need to exit a cloud service:

  • Provider’s services less reliable than advertised in SLAs, contracts or expressed expectations
  • Soured relationship with provider
  • Change in service levels
  • Change of provider ownership
  • Change of price
  • Change of terms and conditions
  • Expiration of enterprise agreement or contract
  • Lack of support
  • Data, security or privacy breach
  • Provider inability to stay competitive with industry features
  • Repeated or prolonged outages
  • Lack of remuneration for services lost
  • Change of internal leadership, strategy or corporate direction

Cloud customers, don’t delay.  All the risk mitigation tasks you would do if one of your in-house application vendors suddenly went out of business should ideally be done in advance, before leveraging cloud services.  Exit strategies are important and necessary insurance policies.  Don’t be caught off guard.


Category: Cloud Providers

vCloud Hybrid Service: My take from VMworld 2013

by Kyle Hilgendorf  |  August 28, 2013  |  12 Comments

This week at VMworld 2013, VMware announced the general availability of vCloud Hybrid Service (vCHS).  vCHS has been in an early adopter program for the last couple of months but will enter GA on Monday.

vCHS is VMware’s public IaaS offering, which will attract comparison against AWS, Azure Infrastructure Services, and others in Gartner’s Cloud IaaS Magic Quadrant.  However, vCHS will be limited for a while.  At launch, vCHS is a US-only hosted service, although it is sure to expand to Europe and Asia in 2014.  Q4 of 2013 promises services like DR-as-a-service and Desktop-as-a-service, but more basic capabilities that have become the norm at competing services, like object storage, will be missing for the foreseeable future.  In fact, many developer-enhancing and cloud-native services (e.g. auto scaling, continuous deployment, packaging) are not part of vCHS at launch.

My expectation is that the interest from enterprise customers will be very high around vCHS, so what is my early take?

First, VMware had to create a public cloud service offering.  AWS has changed the industry and created a market, and VMware had no choice but to compete with a public IaaS offering.  VMware is the private datacenter virtualization and private cloud behemoth.  Yet, increasingly, customers are considering public cloud deployments for future-state (cloud-native) applications.  As organizations use public clouds for cloud-native applications and dev/test workloads, an inflection point is on the horizon for the 80-90% of all other workloads possibly moving to public cloud environments.  VMware did not want to find themselves left out of that future shift.  Therefore, VMware had to try to enter this market on their own.  If not that, then they would have had to find a way to partner with AWS.  As of today, they’ve not found such a partnership.

Second, VMware has a compelling opportunity.  Clients are hugely invested in VMware technology, and there is reason to believe these same organizations are looking for quick and easy runways into the public cloud for traditional workloads.  Migration or conversion of traditional workloads into AWS or Azure has been minimal at best.  No one vendor or provider has a better chance of “holding onto” VMware workloads than VMware itself.  VMware understands the importance of the network in a hybrid cloud environment, and their opportunity with SDN, the NSX offering from Nicira and the ability to cross connect into vCHS data centers will help their hybrid cloud story.  Finally, a true hybrid cloud story centers around management, and VMware is in a better position on management than most major public CSPs, who struggle greatly with native management.

Third, I don’t see vCHS impacting AWS negatively.  I do see it impacting a large market of many smaller or regional vCloud providers.  Because vCHS will be missing many of the features that AWS users have come to depend upon, I do not expect to see any exodus of AWS customers to vCHS.  VMware claims that vCHS and AWS will attract different buyers and that AWS does not focus on enterprise-grade or compliant workloads.  I disagree.  From 2006-2012, AWS did struggle to capture the enterprise buyer, but every movement AWS has made in 2012 and 2013 (and all future movements) is positioned directly at enterprise buyers and enterprise-grade applications.  Furthermore, few providers can compete with AWS on security and compliance capabilities.  However, with the price point of vCHS and with the traditional VMware feature set, many VMware providers, including VSPPs, will face a very fierce new competitor in vCHS.  VSPPs will have to be extremely clear about what value proposition they bring against vCHS (for example, industry vertical specialization) or be relegated to reselling into vCHS.

Fourth, I’m intrigued by the vCHS franchise service design, initially rolling out with Savvis.  VMware must expand domestically and internationally quickly.  They cannot do that on their own.  vCloud Datacenter Services was VMware’s first attempt to do this, but it mostly failed because the various providers differed enough to erode compatibility.  With the vCHS franchise program, VMware owns and operates the vCHS architecture, and the franchisees provide the location, network and facility hosting services.  VMware does not have a large portfolio of datacenters to compete on their own, nor do they have any significant ownership in WAN networking or Internet peering.  Savvis, with CenturyLink, brings the networking breadth to the relationship, and other future franchisees will do much the same internationally.  Expect the cross connects to be similar to AWS Direct Connect, and that is a win for customers.  Both VMware and the franchisee can sell into the service, and the benefit for the franchisee is potential viral hosting growth of vCHS in their facilities as well as the opportunity to upsell customers into managed hosting, colocation and network cross connects.  Franchising will not be easy though.  VMware will have to manage it very closely to ensure quality and consistency, much like McDonald’s corporate tightly oversees all franchise restaurants.  It’s about ensuring a consistent and stable user experience, and that should not be understated.  But it is VMware’s opportunity to enter new locations very quickly.  It would not surprise me if vCHS is in as many locations as AWS and Azure, or more, within 12-18 months through franchising.

Fifth, expect there to be growing pains.  VMware hired a fantastic leader in Bill Fathers, formerly from Savvis.  Bill brings a great leadership background in running services and is already pushing vCHS into a 6-week release cycle – a concept foreign to traditional VMware products.  But vCHS is not a commodity; it’s a uniquely created service.  Multiple vCloud providers have told me that VMware is in for a surprise with their own products, in that VMware will start to find the breaking points of product scalability.  Therefore, I expect vCHS to go through growing pains similar to those other major CSPs have gone through over the past few years.  vCHS will not be perfect, it will have outages, and it may not be as seamless between franchisees as promised.  Customers should know this and pay attention to how VMware responds in the midst of issues, rather than hold them to perfection.  And if customers cannot accept this risk today, they should wait on the sidelines or look to a provider with more years in the market.

Finally, the public IaaS provider market is starting to show some interesting segmentation lines.  AWS is the dominating force, but mega vendors in the form of Microsoft, VMware and Google have made their intentions known and the service development and innovation each company possesses is creating a line between mega providers and the rest of the market.

So what does vCHS mean to you?  Well, I think long term, many organizations will not be able to avoid using it in some capacity.  Even some of the largest AWS adopters will find a place where vCHS shines past AWS.  Perhaps it’s the DRaaS or DaaS offerings on the horizon.  Perhaps it’s simplified lift and shift of large pools of VMs.  Maybe it’s more seamless management between the private datacenter and a public IaaS offering.  Whatever the use case ends up being, there is plenty of room in this market for multiple providers, and most organizations will want at least 2-3 strategic IaaS partners for properly placing workloads based on individual requirements.  With the saturation of VMware in the enterprise, vCHS will surely be a logical endpoint for many of those workloads.  But vCHS will come with a ramp-up and improvement period.  For organizations that want to assess it on day 1 and on an ongoing basis, Gartner’s “Evaluation Criteria for Public Cloud IaaS Providers” can help.

What are your thoughts on vCHS?


Category: AWS Cloud Hybrid IaaS Microsoft Providers vCloud VMware

Cloud Security Configurations: Who is responsible?

by Kyle Hilgendorf  |  April 2, 2013  |  3 Comments

A Rapid7 report surfaced last week that discovered some 126 billion AWS S3 objects were exposed to the general public.  AWS has since taken the brunt of security criticism from many blogs and tech magazines for its “lack of security”.  But I have to say, as an objective analyst, that this is not the fault of AWS.

Security in S3 is binary for each object: private or public.  Within private, there are a number of different settings one can employ.  Private is also the default security control for all S3 objects.  The AWS customer must manually go in and configure each individual object as “public”.  There might be very good reason for doing so.  For example, companies use S3 all the time to post public information that they want to share or make accessible to the world.  S3, and other object stores, are great for posting content such as public websites, videos, webinars, recordings, or pictures.  In other words, there might be very good reason why 126 billion objects are publicly accessible.

But for those objects that should not have been made public, the question really comes down to who is responsible – the provider or the customer?  I’ll argue this is the customer’s responsibility.  AWS offers customers what they want.  Security or public accessibility – the customer chooses.  There are reasons for both, and customers have the power to choose.  Consider how many customers would be upset if AWS took away public accessibility options from S3.  I’d bet a large percentage of S3 customers would complain, as S3 is great for publishing public websites and content.

If AWS has any fault here, it is making self-service and automation too smooth and easy – but isn’t that the goal of public cloud?  It is quite easy to create a bucket policy that opens up access to all current and future objects in a bucket for anonymous users, and perhaps that is what happened to some of the more critical or private data that Rapid7 found in this study.  It is possible that one admin created a bucket policy and another admin or user uploaded sensitive data into the bucket unaware of the security configuration.  But at the same time, these bucket policies can be incredibly helpful for organizations that want to expose all objects in a bucket, for instance, for a public web site.  However, the AWS management console does not provide simple visibility into which objects are accessible publicly or to anonymous users.  To gain this level of insight, you will need to have an understanding of the AWS S3 API.
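
For those willing to script against the API, a minimal sketch with the boto3 Python SDK (hypothetical bucket name) that walks a bucket and flags objects whose ACLs grant access to the anonymous “AllUsers” group might look like this – note that bucket policies would need a separate check via get_bucket_policy:

    import boto3

    s3 = boto3.client("s3")

    ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"
    bucket = "example-bucket"  # hypothetical bucket name

    # Walk every object and flag any whose ACL grants access to anonymous users.
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket):
        for obj in page.get("Contents", []):
            acl = s3.get_object_acl(Bucket=bucket, Key=obj["Key"])
            for grant in acl["Grants"]:
                if grant["Grantee"].get("URI") == ALL_USERS:
                    print("PUBLIC:", obj["Key"], grant["Permission"])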

But, in the end, customers are responsible.  Customers will always be responsible in the public cloud for their applications and their data – beware of configurations, features, and options.  I do not dispute that many objects found in the report may have sensitive information inside; unfortunately, user error or confusion could have led to the accidental public exposure of such objects.  Therefore, it is paramount that organizations employing public cloud services build not only clear governance practices, but also monitoring and alerting practices to raise awareness within the organization when digital assets may be exposed or not secured in the fashion that the data warrants.

 


Category: AWS Cloud Providers