Hadoop World 2011: A Glimpse into Operations
Find sessions of interest and begin planning your Hadoop World experience among the sixty breakout sessions spread across five simultaneous tracks at http://www.hadoopworld.com/agenda/.
The Operations track at Hadoop World 2011 focuses on the practices IT organizations employ to adopt and run Apache Hadoop, with special emphasis on people, processes, and technology. Presentations will include case studies of initial deployments, production scenarios, and cluster expansions. Speakers will discuss advances in reducing the cost of Hadoop deployment and increasing availability and performance.
Unlocking the Value of Big Data with Oracle
Jean-Pierre Dijcks, Oracle
Analyzing new and diverse digital data streams can reveal new sources of economic value, provide fresh insights into customer behavior and identify market trends early on. But this influx of new data can create challenges for IT departments. To derive real business value from Big Data, you need the right tools to capture and organize a wide variety of data types from different sources, and to be able to easily analyze it within the context of all your enterprise data. Attend this session to learn how Oracle’s end-to-end value chain for Big Data can help you unlock the value of Big Data.
Hadoop as a Service in Cloud
Junping Du, VMware
The Hadoop framework was originally designed to run natively on commodity hardware. With the growing adoption of cloud computing, however, there is increasing demand to build Hadoop clusters on a public or private cloud so that customers can benefit from virtualization and multi-tenancy. This session discusses how to address some of the challenges of providing Hadoop as a service on a virtualization platform, such as performance, rack awareness, job scheduling, and memory overcommitment, and proposes some solutions.
Hadoop in a Mission Critical Environment
Jim Haas, CBS Interactive
Our need for better scalability in processing weblogs is illustrated by the change in requirements: processing 250 million vs. 1 billion web events a day (and growing). The Data Warehouse group at CBSi has been transitioning core processes to re-architected Hadoop processes for two years. We will cover strategies used for successfully transitioning core ETL processes to big data capabilities and present a how-to guide for re-architecting a mission-critical Data Warehouse environment while it's running.
Ravi Veeramchaneni, NAVTEQ
Many developers have experience working with relational databases using SQL. The transition to NoSQL data stores, however, is challenging and often confusing. This session will share experiences of using HBase, from hardware selection and deployment to design, implementation, and tuning. By the end of the session, the audience will be better positioned to make the right choices on hardware selection, schema design, and tuning HBase to their needs.
Hadoop Troubleshooting 101
Kate Ting, Cloudera
Attend this session and walk away armed with solutions to the most common customer problems. Learn proactive configuration tweaks and best practices to keep your cluster free of fetch failures, job tracker hangs, and other common issues.
I Want to Be BIG – Lessons Learned at Scale
David “Sunny” Sundstrom, SGI
SGI has been a leading commercial vendor of Hadoop clusters since 2008, and has leveraged its experience with high-performance clusters at scale to deliver individual Hadoop clusters of up to 4,000 nodes. In this presentation, through a discussion of representative customer use cases, you'll explore major design considerations for performance and power optimization, how integrated Hadoop solutions leveraging CDH, SGI Rackable clusters, and SGI Management Center best meet customer needs, and how SGI envisions the needs of enterprise customers evolving as Hadoop continues to move into mainstream adoption.