1/9/12

Introducing Hadoop and Big Data into a Traditional Data Organization: A True Story and Learned Lessons


As some of you may know, one part of my consulting and mentoring work revolves around data, particularly around Big Data and NoSQL systems. Together with a client of mine, we are building one such system for applications in healthcare. We are very excited that we got invited to speak at the Enterprise Data World 2012 conference in Atlanta. How could we refuse an opportunity to show off our work? Care to come to Atlanta and join us? For now, as a teaser, here is what we are going to talk about.
A real-life story about introducing and integrating Hadoop, Big Data, and NoSQL into an organization to reduce cost and speed up data processing. Come and learn how to make it work!

In this talk we will share the story of our journey into Hadoop, Big Data, MapReduce, and NoSQL, with the goal of reducing cost and improving the speed of data processing. Our journey starts with Hadoop and its MapReduce model, which splits processing across many commodity machines. We found this to be an effective solution, albeit not without warts. From our experience, you will learn how to introduce Hadoop effectively into a conventional data processing organization, and how to integrate it not only with existing data processing technologies but also with people. As our appetite grew, we had to reach for NoSQL storage. You will also learn how to migrate from a local deployment to the cloud. We conclude with the "7 Habits of Successful Hadoop Projects".
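
To give a flavor of what "splitting processing across many commodity machines" looks like in practice, here is a minimal sketch of a MapReduce job, the classic word count, written against the standard Hadoop MapReduce Java API. It is only an illustration, not one of the jobs from our healthcare system: the mapper emits (word, 1) pairs, Hadoop shuffles them by key across the cluster, and the reducer sums the counts for each word.

// Illustrative sketch only: the classic word count on Hadoop MapReduce.
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // The map step runs in parallel, one task per input split, spread across the cluster.
  public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);   // emit (word, 1) for every token
      }
    }
  }

  // The reduce step receives all the counts for a given word and sums them.
  public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);  // local pre-aggregation before the shuffle
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));    // input directory, e.g. on HDFS
    FileOutputFormat.setOutputPath(job, new Path(args[1]));  // output directory, must not exist yet
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

The interesting part is how little of this code is about distribution: you write a mapper and a reducer, and Hadoop takes care of splitting the input, scheduling tasks on the cluster, and recovering from failed machines.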

Topics
•    Hadoop and Big Data: why do we care?
•    Avoiding friction and integrating with relational databases
•    Preventing shock: the people issue
•    The buzz of Hive
•    When Hadoop alone is not enough
•    Next: into the Cloud!
•    7 Habits of Successful Hadoop Projects

I'll publish the slides when they become available after the conference. And now, excuse us, we have to attend to our Hadoop cluster ;-)