We received over 300 outstanding submissions this year. With such great content to choose from, we are sure the agenda will be packed with valuable sessions.
We expect to open the Community Choice voting process next week. Community Choice is a free voting system that allows anybody in the Hadoop community to vote for their favorite topics. The top vote-getters in each track will automatically be added to the Hadoop Summit Europe agenda! Be sure to cast your vote and help choose the content you want to see at Hadoop Summit.
Sessions are voted on by the public and selected by a committee of industry luminaries.
Come hear it from the elephant's mouth. You'll get deep-dive technical content from committers across a wide range of topics and projects, from introductory to advanced.
Sessions will be rated on how technical versus business-focused they are, allowing you to pick the right content based on your background.
Hadoop Summit will host community meet-ups and birds of a feather sessions led by leaders in the Hadoop Community.
The only Hadoop Community event in Europe.
Hadoop Summit will play host to the "10 Years of Hadoop" party, the biggest Hadoop celebration in Europe. Hadoop's creators committed their first line of code 10 years ago. Join us to celebrate an achievement that has changed the face of business.
Cooperative data exploration with IPython notebook | January 28, 2016
There are a lot of great tools that can help us work with Big Data, but they all demand substantial resources. How can we ease this CPU and RAM burden? One way is to share the data we are working on, and the results of our computations, with others.
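The sharing idea above can be sketched in plain Python: compute an expensive result once, persist it, and let collaborators load the file in their own notebooks instead of recomputing. The file name and helper function below are hypothetical illustrations, not from the talk itself.

```python
import pickle

def expensive_aggregation(records):
    """Stand-in for a costly computation over a large dataset."""
    totals = {}
    for key, value in records:
        totals[key] = totals.get(key, 0) + value
    return totals

records = [("a", 1), ("b", 2), ("a", 3)]
result = expensive_aggregation(records)

# Persist the result once; teammates load this file instead of recomputing.
with open("aggregation_result.pkl", "wb") as f:
    pickle.dump(result, f)

with open("aggregation_result.pkl", "rb") as f:
    shared = pickle.load(f)

print(shared)  # {'a': 4, 'b': 2}
```

In a notebook setting the same pattern applies: one person runs the heavy cell, saves the output, and everyone else starts from the saved artifact, sparing their own CPU and RAM.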
A Beginners Guide to Becoming an Apache Contributor | January 20, 2016
If you are interested in the workings of the open source community, its economic incentives, how a rag-tag band of developers can produce so much quality output, and how you can contribute to it yourself, then this is for you: a light-hearted, step-by-step walkthrough from someone who jumped in after looking on from the outside for too long.
Advanced Execution Visualization of Spark jobs | January 20, 2016
Understanding the physical plan of a big data application is often crucial for tracking down bottlenecks and faulty behavior. Although Apache Spark offers a useful Web UI component for monitoring and understanding the logical plan of jobs, it lacks a tool for understanding the physical plan of the task scheduler and for monitoring execution at a very low level, including the communication triggered by RDDs and remote block requests...