Unravel for Spark
Overview Demo

Video Transcript

Hi! I’m Marlon Brando with Unravel Data.

I’m going to show you how we’re helping data teams like yours simplify their big data operations and optimize their big data application performance.

Can you relate to the following challenges?

  • lack of visibility into big data usage
  • not having a good understanding of your data pipelines
  • being unable to control or manage runaway jobs
  • being unable to quickly identify an inefficient or failing big data application

Unravel provides a Big Data APM and DataOps platform with an automated performance recommendation engine for applications like Spark, Hive, Kafka, and HBase. We simplify your big data operations across your entire distributed data stack. We help you run more jobs on a cluster with fewer resources, and show you which teams, groups, or users are running bad applications that are slowing down or breaking the cluster.

Let’s hop into the platform. This is the application view, and this is where your application developers, data platform engineers, and DevOps teams start off, with full transparency into what’s going on in your big data environment. It can be on-prem, in the cloud, hybrid, or multi-cloud.

We provide high-level information about the application’s type, status, user, and ID, timing information, and much more. You can filter by app type and by status. You can also tag the applications by department, project, user, and many more options as well.

Spark Demo

Let’s focus on Spark applications for this demo. As you can see, some applications have a blue toolbar widget associated with them. That means we’ve come up with actionable tuning and enhancement recommendations that you can apply. Let’s check these out.

Looking at one of these applications, this is where you’ll see high-level information about the application again: duration, the number of jobs, and stages, so you understand how many jobs are part of this application as well. We give you more detailed information like the code, in this case Scala; we can provide it for Python or SQL as well. There are also the different tasks and attempts, graphs of container CPU and memory usage, and the resources used by executors and drivers, and you can define which parameters you want to look into, with the timeline information woven in there.
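To make that jobs/stages/tasks hierarchy concrete, here is a minimal Scala sketch of the kind of Spark application this view describes. The names and numbers are illustrative, not taken from the demo: each action launches a job, and each shuffle boundary splits that job into stages made up of parallel tasks.

    import org.apache.spark.sql.SparkSession

    object JobsAndStagesSketch {
      def main(args: Array[String]): Unit = {
        // Local session for illustration; on a cluster this would run via spark-submit.
        val spark = SparkSession.builder()
          .appName("jobs-and-stages-sketch")
          .master("local[*]")
          .getOrCreate()

        val data = spark.sparkContext.parallelize(1 to 1000000, numSlices = 8)

        // reduceByKey forces a shuffle, so the single job triggered by
        // collect() below is split into two stages of 8 tasks each:
        // exactly the jobs, stages, and tasks the application view lists.
        val counts = data.map(n => (n % 10, 1L)).reduceByKey(_ + _)
        counts.collect().foreach(println)

        spark.stop()
      }
    }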

On the left-hand side, like I mentioned, are the different jobs of this application. We provide an execution graph, a map of what’s going on with this application. If you hover over these boxes, we’ll provide more stage information. Now, the great thing is that if you click into any of these boxes, it will take you straight to the code as well.

The Gantt chart information is super helpful, especially for understanding the timeline view. You can see that in this Spark application the first job seems to be taking most of the time, so it might be good to look into what is bottlenecking it. That takes you to the stage level, and even there you can drill down into more of a timeline view: understanding data skew for the specific stage, the different components involved there, whether they are running serially or in parallel, and other components like scheduler delay as well.
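As a concrete illustration of the data skew called out in that stage view, here is a hedged Scala sketch of one common mitigation: salting a hot key so its rows spread across many tasks instead of piling onto one straggler. The dataset and the salt factor of 32 are made up for illustration and are not from the demo.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._

    object SkewSaltingSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("skew-salting-sketch")
          .master("local[*]")
          .getOrCreate()
        import spark.implicits._

        // Hypothetical skewed dataset: roughly 90% of rows share one "hot" key,
        // so a plain groupBy would leave one long-running straggler task.
        val events = spark.range(1000000)
          .withColumn("key", when(rand() < 0.9, lit("hot")).otherwise(lit("cold")))

        // Salt the key, aggregate per (key, salt) to spread the work,
        // then aggregate again to merge the partial counts.
        val salted  = events.withColumn("salt", (rand() * 32).cast("int"))
        val partial = salted.groupBy($"key", $"salt").count()
        val merged  = partial.groupBy($"key").agg(sum($"count").as("count"))

        merged.show()
        spark.stop()
      }
    }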

Now, from this perspective, I’m going to go back to the top and look at the code, spotlighting more of a high-level view. In addition to providing all this easily accessible and detailed info, we provide recommendations and actionable intelligence as well, to improve the efficiency and reliability of your applications.

Let’s check that out. This is the icing on the cake. In plain English, we explain what the issue is and why, for example, which specific aspects, jobs, or stages of this application are failing, in red here, and in green what we recommend to mitigate the failure. Efficiency issues are in plain English as well: the issue in red (in this case, the need for a more parallel setup) and the recommendation in green (balancing the load across executors). A common issue that always pops up from a memory perspective is RDD caching.
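To give a feel for what acting on such a recommendation looks like, here is a sketch of the kinds of knobs these suggestions typically target: shuffle parallelism, executor sizing, and caching of reused data. The specific values, input path, and column name are hypothetical, not Unravel’s actual output.

    import org.apache.spark.sql.SparkSession

    object TuningSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("tuning-sketch")
          .master("local[*]") // executor settings below only take effect on a real cluster
          // More shuffle partitions means more parallelism in wide stages.
          .config("spark.sql.shuffle.partitions", "200")
          // Right-size executors instead of over-provisioning memory.
          .config("spark.executor.memory", "4g")
          .config("spark.executor.cores", "4")
          .getOrCreate()

        // Hypothetical input path and column, for illustration only.
        val df = spark.read.parquet("/data/events")

        // Cache a DataFrame reused by several actions so it is
        // computed once rather than once per action.
        df.cache()
        println(df.count())
        println(df.filter("value > 0").count())

        spark.stop()
      }
    }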

Now, looking at this Spark application that took 12 minutes to run: just by implementing one of these recommendations, we can bring the runtime down to two minutes. That’s a 6x speedup, which is pretty significant, and this is for one application in a two-to-three-node demo environment.

Now just imagine what the impact would be for a thousand-plus-node cluster, with multiple applications, multiple data pipelines, Spark, Kafka, Hive, and any combination thereof.

That’s the end of this Spark demo, but there’s a lot more that Unravel has to offer; I’m just going to give you some highlights here. This is a chargeback/showback report, which provides more detailed information tagged by department, user, or project, so you can see who has been utilizing which resources and what the associated costs are.

Cloud migration reporting is very helpful for benchmarking your current big data environment: what would make sense to migrate? Is it more lift-and-shift, or by use case? There’s also a heatmap. And last but not least, the data insights view helps you understand tables and partitions, identify hot, warm, or cold data in your environment, and see what might be useful to migrate to cheaper stores like Glacier. These short demo videos will be up and running shortly, so stay tuned.

Thanks.

End Transcript

See Unravel in Action.