Frank Kane's Taming Big Data with Apache Spark and Python: Real-world examples to help you analyze large datasets with Apache Spark
Contributor(s): Kane, Frank (Author)
ISBN-10: 1787287947     ISBN-13: 9781787287945
Publisher: Packt Publishing
Price: $39.89
Product Type: Paperback - Other Formats
Published: June 2017
Additional Information
BISAC Categories:
- Computers | Data Modeling & Design
- Computers | Databases - Data Mining
Physical Information: 0.62" H x 7.5" W x 9.25" L, 1.13 lbs, 296 pages
 
Descriptions, Reviews, Etc.
Publisher Description:

Frank Kane's hands-on Spark training course, based on his bestselling Taming Big Data with Apache Spark and Python video, is now available as a book. Understand and analyze large datasets using Spark on a single system or on a cluster.

Key Features

  • Understand how Spark can be distributed across computing clusters
  • Develop and run Spark jobs efficiently using Python
  • Work through Frank Kane's hands-on tutorial, with over 15 real-world examples of Big Data processing with Spark

Book Description

Frank Kane's Taming Big Data with Apache Spark and Python is your companion to learning Apache Spark in a hands-on manner. Frank starts by teaching you how to set up Spark on a single system or on a cluster, and you'll soon move on to analyzing large datasets with Spark RDDs and to developing and running effective Spark jobs quickly using Python.
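
To make the single-system setup concrete, here is a minimal sketch (not taken from the book) of running PySpark in local mode; the app name is an arbitrary placeholder, and "local[*]" tells Spark to use every core on the current machine:

    # Minimal local-mode Spark setup; assumes the pyspark package is installed.
    from pyspark import SparkConf, SparkContext

    # "local[*]" runs Spark on this machine, one worker thread per CPU core.
    conf = SparkConf().setMaster("local[*]").setAppName("LocalSparkSketch")
    sc = SparkContext(conf=conf)

    # Distribute a small Python list as an RDD and aggregate it in parallel.
    rdd = sc.parallelize(range(1, 101))
    print(rdd.sum())  # 5050

    sc.stop()

Pointing setMaster at a cluster manager instead of "local[*]" is, at its simplest, what moves the same job from a single system to a cluster.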

Apache Spark has emerged as the next big thing in the Big Data domain, rising from a promising technology to an established superstar in just a few years. Spark lets you quickly extract actionable insights from large amounts of data in real time, making it an essential tool in many modern businesses.

Frank has packed this book with over 15 interactive, fun, real-world examples, and he will empower you to understand the Spark ecosystem and implement production-grade real-time Spark projects with ease.

What you will learn

  • Find out how to identify Big Data problems as Spark problems
  • Install and run Apache Spark on your computer or on a cluster
  • Analyze large datasets across many CPUs using Spark's Resilient Distributed Datasets (RDDs), as in the word-count sketch after this list
  • Implement machine learning on Spark using the MLlib library
  • Process continuous streams of data in real time using the Spark Streaming module
  • Perform complex network analysis using Spark's GraphX library
  • Use Amazon's Elastic MapReduce service to run your Spark jobs on a cluster
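
As a taste of the RDD-based analysis the book teaches, here is a hedged sketch (not one of the book's own examples) that counts word frequencies in a text file; "input.txt" is a placeholder path:

    from pyspark import SparkConf, SparkContext

    conf = SparkConf().setMaster("local[*]").setAppName("WordCountSketch")
    sc = SparkContext(conf=conf)

    counts = (sc.textFile("input.txt")               # one RDD element per line
                .flatMap(lambda line: line.split())  # split each line into words
                .map(lambda word: (word, 1))         # pair every word with a 1
                .reduceByKey(lambda a, b: a + b))    # sum the 1s per word

    # Print the ten most frequent words.
    for word, count in counts.takeOrdered(10, key=lambda pair: -pair[1]):
        print(word, count)

    sc.stop()

Each transformation (flatMap, map, reduceByKey) is distributed across whatever CPUs the master setting provides, which is why an RDD job written against a local file scales to a cluster with essentially the same code.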