From Concepts to Cutting-Edge Data Engineering
Learn how to take data and AI concepts from idea to prototype to production-ready application. Acquire the skills to develop and run data and AI solutions at enterprise scale with ease! Take part in a single training or advance through the entire journey. Learn how to build secure data platforms and reliable AI applications that are engineered for scale.
The Learning Journey for Data Engineers
How do you become a data engineering expert? Start here! We’ve put together a carefully crafted learning journey for data engineers. Knowing engineers love to figure things out on their own, we packed the program with opportunities to learn, hands-on, by solving real-life situations. Plus, there’s plenty of practical philosophy, too.
We’ll teach you how to leverage Docker to ease your deployments and how to navigate code written by data scientists (Advanced Python and Data Science in Production). You will learn to use Apache Airflow, Apache Spark, and Kafka like a forklift to move data around. And we won’t shy away from proven technologies like ElasticSearch, while staying on the cutting edge with others, like Apache Flink.
Download training brochure
Download the GoDataDriven brochure for a complete overview of available training sessions and data engineering, data science, and analytics translator learning journeys.
Junior Data Engineer
Learning Goals for a Junior Data Engineer
- Writes correct and clean code with guidance
- Participates in the technical design of features with guidance
- Knows how to integrate CI/CD concepts into their daily coding
- Able to create simple pipelines without guidance
- Knows how containerization works, and what it simplifies
- Can write and push containers
- Containers & CI/CD / 2 days – In-Company
- Python for Data Engineers / 2 days – Public & In-Company
This two-day GoDataDriven training provides you with the tools to make your code simple, beautiful, and truly Pythonic.
+ Functional Programming in Scala / 2 days – Public & In-Company
Medior Data Engineer
Learning Goals for a Medior Data Engineer
- Understands and makes well-reasoned design decisions and trade-offs in their area
- Able to quickly get familiar with larger codebases
- Able to create complex pipelines without guidance
- Apache Airflow / 1 day – Public & In-Company
This one-day GoDataDriven training teaches you Airflow’s internals, terminology, and best practices for writing DAGs, plus hands-on experience in writing and maintaining data pipelines.
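A DAG in Airflow is simply a set of tasks with explicit dependencies: the scheduler runs each task only once everything upstream of it has finished. As a minimal illustration of that idea (a pure-Python sketch of dependency ordering, with made-up task names, not Airflow's actual API):

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# A toy pipeline: each task maps to the set of tasks it depends on.
# The task names here are illustrative, not from any real Airflow DAG.
pipeline = {
    "extract": set(),
    "transform": {"extract"},
    "load": {"transform"},
    "report": {"load"},
}

def run_pipeline(dag):
    """Execute tasks in an order that respects every dependency."""
    order = list(TopologicalSorter(dag).static_order())
    for task in order:
        print(f"running {task}")
    return order

run_pipeline(pipeline)
# → runs extract, transform, load, report in that order
```

Airflow itself expresses the same idea with operators and its `>>` dependency syntax, and adds scheduling, retries, and monitoring on top.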
- Building Data Products / 2 days – In-Company
- Apache Spark / 2 days – In-Company
+ Microservices / 2 days – In-Company
+ ElasticSearch / 2 days – In-Company
+ Concurrency in Scala / 2 days – Public & In-Company
Senior Data Engineer
Learning Goals for a Senior Data Engineer
- Go-to expert in one area; understands the broad architecture of the entire system
- Provides technical advice and weighs in on technical decisions that impact other teams or the company at large
- Spark Streaming & Apache Kafka / 2 days – In-Company
+ Kubernetes / 2 days – Public & In-Company
+ Apache Flink / 2 days – In-Company
The training gave me a lot of grip and insights on the subject. How to use pandas, cleaning up your data, and plotting data were the most interesting parts for me.
The training did not only provide knowledge about pandas, scikit-learn, but also about the way to think as a data scientist.
The training really starts from scratch, which is a great thing for beginners. The training covers a large range of topics in R, ending with a very interesting section on modelling.
I liked every aspect of the training and would like to thank the trainers. They did an excellent job in explaining how to use Spark for data science. This is now the fourth training from GoDataDriven that I followed, they were all great, but this was the best one so far.
Develop the skills of your organization
Find the right courses to grow your team’s Data & AI skills, or design learning journeys at scale to empower your entire organization.
Data Pipelines with Apache Airflow
Yes, we’re book authors too.
Our experienced data engineers Bas Harenslak and Julian de Ruiter explain how to use Apache Airflow to create efficient and automated pipelines. They draw on their consulting experience at companies like Heineken, Unilever, and Booking.com to present relevant use cases and applications.
You will find the following content in the book:
- Framework foundation and best practices
- Airflow’s execution and dependency system
- Testing Airflow DAGs
- Running Airflow in production