This year, we are celebrating the 10th anniversary of Hadoop. In 2006, we lived in the dark ages of data: a handful of databases were old news, and a single Gartner analyst covered them all. Today, a whole team of Gartner database analysts can barely keep up with all the new options.
In the summer of 2009, the baby elephant Hadoop took its first steps: a Hadoop distribution was introduced by a then-unknown startup, Cloudera, and HDFS and MapReduce became top-level Apache projects. Ominously, that very summer, CRM icon Tom Siebel was trampled by a charging elephant during an African safari. We didn't know back then that the era of CRM had come to an end, and the era of data had begun.
Hadoop revealed to the world the power of data. It evangelized the ideas of distributed computing and massively parallel processing. It brought compute to the data, rather than bringing data to compute. It was the center of a quickly growing open source community and a vibrant ecosystem.
Hadoop was a move from a single product to a stack of many components. It demonstrated that you could apply different components to the same data stored in HDFS and interpret that data in different ways. For example, a company could use the same log files to detect cyberattacks, analyze customer behavior, and monitor data center operations. Ecosystem participants could also access the data in HDFS and apply their own products to it.
The number of Hadoop components kept growing, and so did the confusion about what to use when. In 2013, an article titled "Dear Gartner, you are wrong" described the merits of Hadoop. I was cast as the epitome of Gartner in that article, for pointing out in a blog post that Hadoop was falling into the trough of disillusionment.
In 2014, Spark became a top-level Apache project. Its creators had learned the lessons of the Hadoop ecosystem and delivered an easy-to-use, general-purpose analytical platform. The same people who had been saying "Gartner, you are wrong" changed their stance. Now they proclaimed: Hadoop is hard, MapReduce is legacy, Spark is the way to go.
Spark turned the attention of the Hadoop ecosystem from data to analytics. HDFS was storage, and Spark was compute on top of that storage. Spark effectively reduced Hadoop to HDFS and YARN. Spark could also run on other storage, such as Cassandra and RDBMSs, and on other resource managers, such as Mesos. And not just Spark: many other Hadoop ecosystem projects, Kafka for example, could run independently of the Hadoop core. Multiple compute workloads made it necessary to scale storage and compute independently. This was exactly what cloud vendors were in the perfect position to do. Moreover, they could provide compute on that storage, both through their own offerings and through their marketplaces. The main storage in the cloud was object storage, not HDFS.
Cloud vendors jumped on the idea of data lakes, which thrive on the separation of compute and storage. But many people argued that successfully decoupling storage and compute depends on eliminating, or at least minimizing, the performance degradation compared to coupled workloads. Nowadays, infrastructure can meet most performance goals: more and better memory, new algorithms for storing data, and smarter caching on the compute side. Networks are reaching levels that allow customers to read terabytes of data in seconds directly from storage.
Recently, "Hadoop" disappeared from the names of the platforms offered by the Hadoop distribution vendors, and even the greatest advocate of data lakes, Hortonworks, stopped talking about the lakes. Maybe because data lakes are evaporating into the cloud?
Hadoop will certainly survive well beyond its 10th anniversary, just as CRM did. The distributed computing and massively parallel processing that Hadoop popularized will prevail. But newer technologies and the rise of the cloud will transform Hadoop into something else. Hadoop is dead, long live Hadoop!
Follow Svetlana on Twitter @Sve_Sic