Optimizing Apache Spark & Tuning Best Practices

25 April, 2024
Amsterdam, The Netherlands

2 days
In Person
Data Engineering

As data scales up, processing it efficiently becomes ever more crucial. Building on our experience as one of the world’s most significant Apache Spark users, this 2-day course provides an in-depth overview of the do’s and don’ts of one of the most popular analytics engines available.

Book this training

Book now

Looking to upskill your team(s) or organization? 

Diego will gladly help you further with custom training solutions. 

Diego Teunissen
Data and AI Training Advisor

+31 6 1591 4440
Diego.Teunissen@xebia.com
linkedin.com/in/diego-teunissen/

Get in touch

Duration

2 days

Time

09:00 – 17:00

Language

English

Lunch

Included

Certification

No

Level

Professional

What will you learn?

After the training, you will be able to:

Explain what Apache Spark does under the hood.

Use best practices to write performant code.

Read and understand the query plans for your Spark applications.

Explain the Spark fundamentals, including the execution model: Driver/Executors.

Efficiently work with caching, the shuffle service, and fair scheduling.

Troubleshoot optimization problems and memory issues.

Program

The trainer delivers the content using notebooks hosted in a cloud environment. Each participant will have a Spark cluster to experiment with.

  • Download and understand the dataset used during the training
  • Theory covering Spark basics and the Spark UI
  • Apply optimizations in practice

This training is for you if:

You are comfortable using Spark but want to learn how optimizations can be applied to improve runtime.

You want to learn how Spark works fundamentally – from text, to plan, to execution.

You are comfortable using Python.

This training is not for you if:

You don’t use Python with Spark (PySpark).

You want to learn how to transform notebook code into production-ready code (check out our Production-Ready Machine Learning course instead).

You want to learn how to use Databricks (this course is based on open-source Spark and is applicable to Databricks, but we do not cover Databricks concepts such as Jobs, Notebooks, Sharing, Repos, connectors, Databricks Runtimes, etc.).

Why should I follow this training?

Learn about Apache Spark: best practices for writing performant code, and how to tweak and debug Spark applications.

Grasp the Spark fundamentals, including the execution model (driver and executors), caching, the shuffle service, and fair scheduling.

Learn from and network with Apache Spark experts.

What else should I know?

After registering for this training, you will receive a confirmation email with practical information. A week before the training, we will ask you about any dietary requirements and share any literature you may need to prepare.

See you soon!

All literature and course materials are included in the price. 


Also interesting for you

View all trainings
MLOps on AWS

Discover what MLOps is and how you can apply it in AWS (Amazon Web Services) with our MLOps on AWS training course.

AWS
Cloud
Data Engineering
Machine Learning
View training
MLOps on GCP

Discover what MLOps is and how you can apply it in GCP (Google Cloud Platform) with our MLOps on GCP training course.

Data Analytics
Data Engineering
Data Science
Google Cloud Platform (GCP)
3 days
In Person

Next:

29 May, 2024

From:

€1995

View training
Apache Airflow Training 

Master Apache Airflow’s workflow magic. Seamlessly schedule, monitor, and optimize workflows. Elevate your automation game today.

Data Engineering
2 days
In Person

Next:

17 – 18 Jun, 2024

From:

€1465

View training
Data Processing at Scale

Learn to use Apache Spark to process large sets of data.

Data Engineering
View training