Up to 50% Performance Gain on Hadoop* Clusters? You Bet Your Tweets.

Processing over 1 trillion events per day, Twitter is one of the largest Hadoop* users in the world—typical clusters contain over 100,000 HDDs, half a million compute threads, and an exabyte of physical storage.

But there was a scaling problem. The company’s configuration was reaching an I/O performance limit that could not be solved by simply adding more and bigger HDDs, due to space and power limitations.

Join Milind Damle, Senior Director of Intel Big Data Technologies, to find out how Twitter got a new handle on this ocean of data, including how they:

  • Reduced runtimes by up to 50% on existing hardware
  • Removed a storage I/O bottleneck that enabled them to increase processor utilization
  • Achieved higher data center density by reducing the number of required HDDs
  • Achieved a projected 30% savings in total cost of ownership (TCO)

Get the software
Intel® VTune™ Amplifier Platform Profiler—This feature is included in the standalone Intel® VTune™ Amplifier tool. Free.

More resources

Milind Damle, Senior Director, Big Data Technologies, Intel Corporation

Milind is a Senior Director of Big Data Technologies at Intel and leads a team responsible for performance analysis, tuning, optimization, and benchmarking of big data workloads and applications on Intel® Architecture (IA) and other platforms. Additionally, this team delivers new features into the Apache Hadoop* and Spark* projects and helps internal and external customers incorporate them into their respective IA optimizations. Milind joined Intel in 2002 and holds a Master’s in Computer Science and Engineering from the Indian Institute of Technology in Mumbai.

Performance varies by use, configuration, and other factors. Learn more at www.Intel.com/PerformanceIndex.