
Hadoop’s Achilles’ Heel in 2015

By Nick Heudecker | December 26, 2014 | 2 Comments


Recently my colleague Merv Adrian and I spoke with a top-four financial services firm about its upcoming Hadoop deployment. After several months of testing, the ten-node pilot was moving to production in a few weeks. We asked if any consulting firms or systems integrators had helped with the project; the response was that Hadoop skills were simply unavailable.

If a top-tier finserv company can’t find Hadoop skills, how will other enterprises find them?

Hadoop is a complicated beast. Commercial distributions frequently comprise more than two dozen projects, each with its own pace of development, version incompatibilities and bugs. Just getting Hadoop running – let alone processing data – is a substantial achievement requiring a variety of skills. Actually doing something with the data requires another set of uncommon, if not rare, skills.
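To give a flavour of what “actually doing something with the data” involves, below is a minimal sketch of the canonical word-count job, following the standard Apache Hadoop MapReduce tutorial example rather than anything from the deployment discussed above. Even this “hello world” of Hadoop carries a fair amount of Java ceremony:

    // A minimal sketch of the standard Hadoop word-count job (per the
    // Apache Hadoop MapReduce tutorial); illustrative only.
    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {

      // Map phase: emit (word, 1) for every token in every input line.
      public static class TokenizerMapper
          extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
          StringTokenizer itr = new StringTokenizer(value.toString());
          while (itr.hasMoreTokens()) {
            word.set(itr.nextToken());
            context.write(word, ONE);
          }
        }
      }

      // Reduce phase: sum the counts emitted for each word.
      public static class IntSumReducer
          extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        public void reduce(Text key, Iterable<IntWritable> values,
            Context context) throws IOException, InterruptedException {
          int sum = 0;
          for (IntWritable val : values) {
            sum += val.get();
          }
          result.set(sum);
          context.write(key, result);
        }
      }

      // Driver: wire up the job and submit it to the cluster.
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }

And that is only the application layer. Standing up HDFS, YARN and the rest of a distribution’s two dozen projects before this job can run anywhere is a separate skill set entirely.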

Enterprise adoption of Hadoop will be significantly constrained in 2015 unless Hadoop becomes drastically simpler to deploy and manage or the pool of available talent increases exponentially. With new components being added almost daily (Spark, Flink, Hama, etc.), the former doesn’t look possible. The availability of Hadoop skills must increase, and increase rapidly, for meaningful adoption to occur.

What are you doing to develop Hadoop skills at your company? Let me know in the comments.



2 Comments

  • Alan Grogan says:

    If you look at any big consulting or IT firm’s recruitment site you will see many Big Data roles open (and open for many months). The problem I see is that the excellent Data Scientists I coach or have the pleasure to manage either refuse to work with or for such firms due to corporate processes, or have been turned down at interview for non-technical reasons, despite the roles being more than 90% technical. I would also add that many Data Scientists will not work directly for a financial services firm due to the sector’s perceived weakness in IT support and lack of credible proof-of-concept environments.

    I feel the issue is not that the support is unavailable, but that large organisations can fall into the trap of thinking Big Data needs a large IT or consulting firm to help. I don’t blame them, as the large Big Data ‘suppliers’ probably spend as much on marketing in this space as on capability.

    Successful Big Data proving and deployment is rather straightforward. The issue, once a client comes across a successful Big Data consulting firm (usually by word of mouth), is (1) getting access to the necessary internal client data, (2) obtaining the needed suppliers on the PSL, and (3) confirming the preferred support model in BAU (in-house/managed/offshore/nearshore/onshore).

  • Shil Desai says:

    I believe large organisations cannot do this for multiple reasons. The world is moving towards custom solutions: niche companies and skills are needed to address specific problems, so we see a lot of startups rising in these niche spaces. I recommend that all companies working in this space invest and do the following to keep pace with the dynamic needs of big data.

    1. Create and think through all possible domain-specific use cases and prototype them on basic infrastructure
    2. Share the knowledge with your team and build up a workforce
    3. Do the above regularly

    For clients:
    1. Choose the partner carefully
    2. Trust the implementation partner
    3. Go for a pragmatic approach
    4. Do regular health checks on such projects