Fast Data Applications with Spark and Python Workshop on February 7th

Data Community DC and District Data Labs are hosting a full-day Fast Data Applications with Spark and Python workshop on Saturday February 7th. For more info and to sign up, go to http://bit.ly/Zhj0y1. Register before January 23rd for an early bird discount!

Overview

Hadoop has made the world of Big Data possible by providing a framework for distributed computing on economical, commercial off-the-shelf hardware. Hadoop 2.0 implements a distributed file system, HDFS, and a cluster resource management framework, YARN, that allows distributed applications to easily harness the power of clustered computing on extremely large data sets. Over the past decade, the primary application framework has been MapReduce - a functional programming paradigm that lends itself extremely well to designing distributed applications but carries a lot of computational overhead.

Many excellent analytical applications and algorithms have been written in MapReduce, creating an ecosystem that has helped Hadoop continue to grow as an effective tool. However, more complex algorithms, especially machine learning algorithms, often require extremely complex chains of jobs to conform to the MapReduce functional paradigm. Enter Spark, an open source Apache project that builds on Hadoop's cluster resources and data stores (particularly HDFS) but allows developers to break out of the MapReduce paradigm and write distributed applications that are much faster.

Spark also distributes applications to a cluster by using distributed executor processes. Spark developers write applications that are intended to work on local data; unlike MapReduce tasks, however, these executors are in communication with each other and can share data via an external store. Spark is intended to work with Hadoop data stores, but it can also run in standalone mode, or, if you already have a Hadoop 2.0 cluster, on YARN. The flexibility that Spark provides means that it can be used to implement more complex algorithms and applications that were previously impractical with MapReduce patterns.

Spark can hold intermediate data in memory, making it up to 100 times faster than disk-based MapReduce, and it provides programming APIs in Scala, Java, and Python - making it more accessible to developers. Spark ships with an interactive command line shell for quickly exploring data on the cluster, a library for writing SQL-like queries (Spark SQL), and a fairly complete machine learning library (MLlib). Importantly, it can also execute graph algorithms (via GraphX) that were previously difficult to port to MapReduce frameworks.
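
For example, a quick session in the interactive PySpark shell might look like the following sketch (the log file name is hypothetical, and in the shell sc is a ready-made SparkContext):

    # Load a text file into an RDD and explore it interactively.
    lines = sc.textFile("access.log")                    # hypothetical input file
    errors = lines.filter(lambda line: "ERROR" in line)
    errors.cache()   # keep the filtered RDD in memory for repeated queries
    errors.count()   # actions like count() trigger the distributed computation
    errors.take(3)   # inspect a few matching lines without collecting everything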

What You Will Learn

In this one day workshop, we will introduce Spark in a high-level context. Spark is fundamentally different from writing MapReduce jobs, so no prior Hadoop experience is needed. You will learn how to interact with Spark on the command line and conduct rapid in-memory data analyses. We will then work on writing Spark applications to perform large, cluster-based analyses, including SQL-like aggregations, machine learning applications, and graph algorithms and search. The course will be conducted in Python using PySpark.
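
As a preview of the application-writing portion, here is a minimal sketch of a standalone PySpark word count (the input and output paths are hypothetical):

    # wordcount.py - a minimal standalone PySpark application
    from pyspark import SparkConf, SparkContext

    if __name__ == "__main__":
        conf = SparkConf().setAppName("WordCount")
        sc = SparkContext(conf=conf)

        # Count the occurrences of each word in the input file.
        counts = (sc.textFile("hdfs://namenode/shakespeare.txt")   # hypothetical path
                    .flatMap(lambda line: line.split())
                    .map(lambda word: (word, 1))
                    .reduceByKey(lambda a, b: a + b))

        counts.saveAsTextFile("hdfs://namenode/wordcounts")        # hypothetical path
        sc.stop()

An application like this is then handed to the cluster with the spark-submit script, e.g. spark-submit wordcount.py, with flags to choose the cluster master.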

Course Outline

The workshop will cover the following topics:

  • Interacting with Spark via the Spark Shell
  • Interacting with RDDs and other distributed data
  • Creating Spark applications in Python
  • Submitting Spark applications to the cluster
  • Aggregations and Queries using Spark SQL (see the sketch after this outline)
  • Machine Learning with Spark MLlib
  • Graph computing with Spark GraphX
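
As a small taste of the Spark SQL topic above, a query session might look like the following sketch (the JSON file and its fields are illustrative, and sqlContext is built from a running SparkContext):

    from pyspark.sql import SQLContext

    sqlContext = SQLContext(sc)

    # Load a JSON file of customer records; Spark SQL infers the schema.
    customers = sqlContext.jsonFile("customers.json")    # hypothetical file
    customers.registerTempTable("customers")

    # Run a SQL-like aggregation over the distributed data set.
    top_states = sqlContext.sql(
        "SELECT state, COUNT(*) AS n FROM customers "
        "GROUP BY state ORDER BY n DESC LIMIT 10")

    for row in top_states.collect():
        print(row)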

After this course you should understand how to build distributed applications using Python and Spark, particularly for conducting analyses. You will be introduced to Spark applications and be able to run Spark SQL queries on a distributed database, conduct machine learning with Spark MLlib, and execute graph algorithms with Spark GraphX.

Instructor: Benjamin Bengfort

Benjamin is an experienced data scientist and Python developer who has worked in the military, industry, and academia for the past eight years. He is currently pursuing his PhD in Computer Science at the University of Maryland, College Park, doing research on computational intelligence. He holds a Master's degree from NDSU and is also adjunct faculty at Georgetown University, where he teaches Data Science and Analytics. He is the author of the books Practical Data Science Cookbook and Hadoop Fundamentals for Data Scientists.