The 2018 Big Data Activation Report also reveals that Presto and Apache Spark are the fastest-growing engines, with both showing significant spikes in usage. Presto usage has especially surged, experiencing 420 percent growth in compute hours and a 365 percent increase in the total number of commands run.
“What’s most evident from these findings is that organizations are understanding the immense value of data and are adopting new technologies and processes in search of new insights and revenue opportunities,” said Qubole Co-founder and CEO, Ashish Thusoo. “With three-quarters of businesses actively using multiple engines for their data programs, it’s clear that data activation strategies are becoming more nuanced in matching the best tool to the individual job.”
According to the report, Apache Spark also saw a significant increase in both measures, with compute hours growing by 298 percent and overall commands growing by 439 percent. While more modest, Apache Hadoop/Hive also saw growth, with the number of commands increasing 129 percent throughout the year. However, Apache Hadoop continued to log the largest number of compute hours, with 45.5 million hours recorded throughout the year.
Additionally, the number of users accessing each platform grew throughout the year, painting a picture of how businesses are making data more accessible, reducing bottlenecks within data teams and addressing the ongoing talent shortage within the data science, engineering and analyst community. Presto saw the largest increase, with 255 percent more individuals running commands on the platform over the past year. Apache Spark and Hadoop also saw significantly more people initiating commands, growing 171 percent and 136 percent respectively.
Other key findings from the 2018 Big Data Activation Report included:
The magnitude of big data workloads remains vast: The report shows more than 58 million commands processed across Presto, Apache Hadoop/Hive, and Apache Spark between 2012 and 2017.
The need for self-sufficiency scales as the company grows: Companies are actively lessening their reliance on administrators as they grow. The report shows an average of 16 users per administrator for small implementations; for medium-sized implementations the ratio grew to 48 to 1, and for large-scale implementations it rose to 188 to 1.
Businesses are utilizing spot instances to reduce costs: Total usage of Amazon EC2 Spot Instances grew 4.6 times across all three engines since January 2016. Apache Hadoop/Hive 2 saw the highest growth in spot instances per cluster, averaging 61.5 percent and marking 633 percent growth throughout the year.
New tools are gaining adoption: Almost 30 percent of companies have used Apache Airflow for orchestrating sophisticated data preparation pipelines and operationalizing machine learning using Python code.
The report was released at Data Platforms Live 2018, the only industry conference focused exclusively on helping data teams build the modern, self-service data platform. This year’s event saw keynote speeches by both Sumit Gupta, VP, AI, Machine Learning and HPC at IBM Cognitive Systems, and Ashish Thusoo, Co-founder and CEO at Qubole, who brought their unique insights into how companies can demystify AI and cognitive computing and better implement strategies to gain true business value from their big data programs.
The full report is available here and is based on anonymized data from a sample of more than 200 Qubole customers.