Fast Data Processing with Spark
Abstract
Spark is a framework for writing fast, distributed programs. It solves problems similar to those Hadoop MapReduce addresses, but with a fast in-memory approach and a clean, functional-style API. With its ability to integrate with Hadoop, and with built-in tools for interactive query analysis (Shark), large-scale graph processing and analysis (Bagel), and real-time analysis (Spark Streaming), Spark can be used interactively to process and query big data sets quickly. Fast Data Processing with Spark covers how to write distributed MapReduce-style programs with Spark. The book guides you through every step required to write effective distributed programs: from setting up your cluster and exploring the API interactively, to deploying your jobs to the cluster and tuning them for your purposes. Fast Data Processing with Spark covers
everything from setting up your Spark cluster in a variety of deployment scenarios (standalone, EC2, and so on) to using the interactive shell to quickly prototype distributed programs and explore the Spark API. From there, we move on to writing and deploying distributed jobs in Java, Scala, and Python. We also look at how to use Hive with Spark to run SQL-like queries through Shark, as well as how to manipulate resilient distributed datasets (RDDs).
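As a taste of the interactive workflow the book describes, here is a minimal word count in the Spark shell, written in Scala. This is only a sketch: the shell provides the SparkContext as sc, and the input path input.txt is a hypothetical placeholder.

    // sc is the SparkContext that the Spark shell creates for you
    val lines = sc.textFile("input.txt")                 // hypothetical input file
    val counts = lines.flatMap(line => line.split(" "))  // split each line into words
                      .map(word => (word, 1))            // pair each word with a count of 1
                      .reduceByKey(_ + _)                 // sum the counts for each word
    counts.take(10).foreach(println)                     // print a small sample of the results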
