Performance & Data Eng

Spark APM – What is Spark Application Performance Management?

What is Spark?

Apache Spark is a fast and general-purpose engine for large-scale data processing. It’s most widely used to replace MapReduce for fast processing of data stored in Hadoop. Spark also provides an easy-to-use API for developers.
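
To give a sense of how compact that API is, here is a minimal PySpark sketch. The file name and column names are illustrative only, not taken from this article.

```python
from pyspark.sql import SparkSession

# Start (or reuse) a local Spark session.
spark = SparkSession.builder.appName("quickstart").getOrCreate()

# Hypothetical input file; any CSV with "country" and "amount" columns would do.
df = spark.read.csv("events.csv", header=True, inferSchema=True)

# High-level, SQL-like operations instead of hand-written MapReduce code.
df.filter(df["amount"] > 100).groupBy("country").count().show()

spark.stop()
```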

Designed specifically for data science, Spark has evolved to support more use cases, including real-time stream event processing. Spark is also widely used in AI and machine learning applications.
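
As an illustration of the streaming use case, here is a minimal Structured Streaming sketch. The socket source and word-count logic are the standard Spark starter example, not something specific to this article.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import explode, split

spark = SparkSession.builder.appName("stream-demo").getOrCreate()

# Read lines from a local socket (e.g. started with `nc -lk 9999`) as an unbounded table.
lines = (spark.readStream.format("socket")
         .option("host", "localhost")
         .option("port", 9999)
         .load())

# Classic streaming word count over the incoming lines.
words = lines.select(explode(split(lines.value, " ")).alias("word"))
counts = words.groupBy("word").count()

# Continuously print updated counts as new events arrive.
query = counts.writeStream.outputMode("complete").format("console").start()
query.awaitTermination()
```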

There are six main reasons to use Spark.

1. Speed – Spark’s in-memory, DAG-based execution engine can run workloads up to 100 times faster than traditional MapReduce jobs on Hadoop.

2. Ease of use – Spark natively supports Python, Java, R, and Scala, and offers dozens of high-level operators that make it easy to build applications (see the sketch after this list).

3. General purpose – Spark’s components (Spark SQL, Structured Streaming, MLlib, and GraphX) can be combined in a single application to build robust, comprehensive data products.

4. Platform agnostic – Spark runs in nearly any environment: standalone, on YARN or Kubernetes, or on managed cloud platforms.

5. Open source – Spark, its components, and its complementary technologies are free and open source; users can change the code as needed.

6. Widely used – Spark is the industry standard for many tasks, so expertise and help are widely available.
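
To make reasons 1 and 2 concrete, the sketch below caches a small dataset in memory and reuses it across a few high-level operations. The data is invented for the example.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("reasons-demo").getOrCreate()

# Hypothetical sales data built in-line for the example.
sales = spark.createDataFrame(
    [("US", "book", 12.0), ("DE", "book", 9.5), ("US", "pen", 1.5)],
    ["country", "item", "price"],
)

# Reason 1: caching keeps the data in memory, so repeated queries skip re-reading it.
sales.cache()

# Reason 2: concise, high-level operators instead of low-level MapReduce code.
sales.groupBy("country").agg(F.sum("price").alias("revenue")).show()
print(sales.filter(F.col("price") > 5).count())

spark.stop()
```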

A Brief History of Spark

Today, when it comes to parallel big data analytics, Spark is the dominant framework that developers turn to for their data applications. But what came before Spark?

In the mid-2000s, a small group of developers, much of whose later work was done at Yahoo!, started building an open, distributed storage and computing platform. In 2006 they released their work as an open source project called Hadoop.

Around the same time, Google published a paper describing MapReduce, the programming model it used to process its enormous volumes of data; Hadoop included an open source Java implementation of that model. As Hadoop grew in popularity for its ability to store and process massive volumes of data, developers at Facebook wanted to give their data science team an easier way to work with data in Hadoop. As a result, they created Hive, a SQL-like data warehousing framework built on top of Hadoop.

Even though Hadoop was gaining wide adoption at this point, there still weren’t any good interfaces for analysts and data scientists to use. So, in 2009, a group of researchers at the University of California, Berkeley’s AMPLab started a new project to solve this problem. Thus Spark was born – and it was released as open source a year later.

What is Spark APM?

Spark enables rapid innovation and high performance in your applications. But as your applications grow in complexity, inefficiencies are bound to be introduced. These inefficiencies add up to significant performance losses and increased processing costs.

For example, a Spark cluster may sit idle between batches because of slow data writes on the server: the next batch can’t start until all of the previous batch’s tasks have completed. Your Spark jobs are “bottlenecked on writes.”

When this happens, you can’t scale your application horizontally – adding more servers to help with processing won’t improve your application performance. Instead, you’d just be increasing the idle time of your clusters.
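
One way to spot this kind of write bottleneck yourself, even without APM tooling, is to inspect Structured Streaming’s per-batch metrics. The sketch below is a rough illustration: the rate source and the 80% threshold are arbitrary choices for the example, not Unravel’s method.

```python
import time
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("write-bottleneck-check").getOrCreate()

# Illustrative stream: the built-in "rate" source generates rows continuously.
stream = spark.readStream.format("rate").option("rowsPerSecond", 1000).load()

query = (stream.writeStream
         .format("memory")        # in a real job this might be a slow external sink
         .queryName("rate_sink")
         .start())

time.sleep(10)                    # let a few micro-batches run

progress = query.lastProgress     # metrics for the most recent micro-batch
if progress:
    durations = progress["durationMs"]
    add_batch_ms = durations.get("addBatch", 0)        # time spent executing and writing the batch
    trigger_ms = durations.get("triggerExecution", 0)  # total time for the whole trigger
    if trigger_ms and add_batch_ms / trigger_ms > 0.8:
        print("Most of each micro-batch is spent writing results – likely bottlenecked on writes")

query.stop()
spark.stop()
```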

Unravel Data for Spark APM

This is where Unravel Data comes in to save the day. Unravel Data for Spark provides a comprehensive full-stack, intelligent, and automated approach to Spark operations and application performance management across your modern data stack architecture.

The Unravel platform helps you analyze, optimize, and troubleshoot Spark applications and pipelines in a seamless, intuitive user experience. Operations personnel, who have to manage a wide range of technologies, don’t need to learn Spark in great depth in order to significantly improve the performance and reliability of Spark applications.

Example of Spark APM with Unravel Data

A Spark application consists of one or more jobs, each of which in turn has one or more stages. A job corresponds to a Spark action – for example, count, take, or foreach. Within the Unravel platform, you can view all of the details of your Spark application.
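
A quick way to see this mapping is to run a few actions and watch a separate job appear for each one in the Spark UI (or in an APM tool). The snippet below is a minimal sketch with made-up data.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("jobs-and-stages").getOrCreate()

# Transformations are lazy; defining them does not run anything yet.
df = spark.range(1_000_000)
doubled = df.selectExpr("id * 2 AS doubled")

# Each action below triggers its own Spark job, and each job is split
# into stages at shuffle boundaries.
print(doubled.count())              # action 1: count
print(doubled.take(5))              # action 2: take
doubled.foreach(lambda row: None)   # action 3: foreach

spark.stop()
```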

Unravel’s Spark APM lets you:

  • Quickly see which jobs and stages consume the most resources
  • View your app as a resilient distributed dataset (RDD) execution graph
  • Drill into the source code from the stage tile, Spark stream batch tile, or the execution graph to locate the problems

Unravel Data for Spark APM can then help you:

  • Resolve inefficiencies, bottlenecks, and reasons for failure within apps
  • Optimize resource allocation for Spark drivers and executors
  • Detect and fix poor partitioning
  • Detect and fix inefficient and failed Spark apps
  • Use recommended settings to tune for speed and efficiency (a sketch of the kinds of settings involved follows this list)
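
As a rough illustration of the kinds of settings such recommendations cover – executor sizing, shuffle partitioning, and repartitioning skewed data – here is a hedged sketch. The values, column names, and storage paths are hypothetical, not Unravel output.

```python
from pyspark.sql import SparkSession

# Illustrative executor and shuffle settings; real values depend on the cluster and workload.
spark = (SparkSession.builder
         .appName("tuning-sketch")
         .config("spark.executor.memory", "4g")           # heap per executor
         .config("spark.executor.cores", "4")             # cores per executor
         .config("spark.sql.shuffle.partitions", "200")   # partitions produced by shuffles
         .getOrCreate())

# Hypothetical input path; in practice this would be your own event data.
df = spark.read.parquet("s3://example-bucket/events/")

# Fixing poor partitioning: spread the data evenly before an expensive group-by.
df = df.repartition(200, "event_date")
(df.groupBy("event_date").count()
   .write.mode("overwrite")
   .parquet("s3://example-bucket/daily_counts/"))

spark.stop()
```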

Improve Your Spark Application Performance

To learn how Unravel can help improve the performance of your applications, create a free account and take it for a test drive on your Spark applications. Or, contact Unravel.