Berlin 2024

This year, Ververica and the Apache Flink Community have a lot to celebrate, including the 10 year anniversary of Apache Flink.

By popular demand, Flink Forward Berlin 2024 has been extended to a 4-day program to accommodate the festivities!

Optionally, join the expert-led, two-day in-person Apache Flink Bootcamp or the Deep Dive Masterclass Program, followed by two full conference days.

There are four Masterclass Deep Dives, split across two tracks. Participants will be randomly assigned a track to follow, ensuring you don't miss a thing! They are:

  1. Apache Flink: From Data to Intelligence
  2. Flink SQL Origins and Future: Insights from the Original Creator
  3. Bridging Data Silos with Flink CDC
  4. Evolving to a Streaming Lakehouse with Apache Flink

Check the agenda for full details.

The main event spans 2 days packed with exciting content, with sessions selected by the Program Committee and presented in person by Apache Flink community members from around the world.

Deep Dive Masterclass

Day 1, Morning session
  • Track 1: Apache Flink: From Data to Intelligence
    Trainers: Xintong Song, Jun Qin, Ben Gamble
  • Track 2: Bridging Data Silos with Flink CDC
    Trainers: Leonard Xu, Alexey Novakov, Muhammet Orazov

Day 1, Afternoon session
  • Track 1: Flink SQL Origins and Future
    Trainers: Lincoln Lee, Lorenzo Affetti, Ahmed Hamdy, Jeyhun Karimov
  • Track 2: Evolving to a Streaming Lakehouse with Apache Flink
    Trainers: Jingsong Li, Giannis Polyzos

Day 2, Morning session
  • Track 1: Bridging Data Silos with Flink CDC
    Trainers: Leonard Xu, Alexey Novakov, Muhammet Orazov
  • Track 2: Apache Flink: From Data to Intelligence
    Trainers: Xintong Song, Jun Qin, Ben Gamble

Day 2, Afternoon session
  • Track 1: Evolving to a Streaming Lakehouse with Apache Flink
    Trainers: Jingsong Li, Giannis Polyzos
  • Track 2: Flink SQL Origins and Future
    Trainers: Jark Wu, Lorenzo Affetti, Ahmed Hamdy, Jeyhun Karimov

Deep Dive Masterclass Trainers

Leonard Xu, Apache Flink PMC Member & Committer
Apache Flink PMC member and committer, and the Flink CDC lead. He works at Alibaba, focusing on data engineering and CDC.
Xintong Song, Apache Flink PMC Member & Committer and Staff Software Engineer
Xintong Song is a Staff Software Engineer at Alibaba Cloud, where he leads a team working on Flink's distributed execution framework. He is also an Apache Flink PMC member and committer, and an initiator and promoter of Flink 2.0. He holds a Ph.D. from Peking University.
Lincoln Lee, Apache Flink PMC Member & Committer and Head of Flink SQL
Lincoln is a committer and PMC member of Apache Flink. He is a staff engineer at Alibaba Cloud and has focused on Flink SQL and stream-batch unification for many years.
Jingsong Li, Apache Flink PMC Member & Committer and Staff Engineer
Since 2014, he has focused on the research and development of streaming computing at Alibaba. Since 2017, he has worked on Alibaba Blink and contributed actively to the Apache Flink community.

Recently, he has focused mainly on Apache Paimon, a lake storage that unifies streaming and batch processing.
Jun Qin, Head of Solutions Architecture
Jun Qin is Head of Solutions Architecture at Ververica. Previously, he worked as a Solutions Architect & Technical Account Manager at Ververica, as a Technical Account Manager at MapR, and as a senior system support programmer/analyst at Amadeus. He has a Ph.D. in computer science and specializes in distributed computing and system architecture.
Giannis Polyzos, Staff Streaming Product Architect
Giannis is an engineer and architect with extensive experience in streaming and stream processing systems. Over the years he has architected streaming data pipelines with technologies like Spark, Kafka, Pulsar, and Flink. These days he focuses on Apache Flink and streaming lakehouses.
Alexey Novakov, Solution Architect
I am a Solution Architect who has spent the last six years working on data solutions and products. At Ververica, I focus on supporting clients in solving their challenges in adopting data stream processing with Apache Flink. In previous projects and companies, I built systems such as data lakes, data integration layers, and data virtualization layers. In my spare time, I contribute to various open-source projects or start my own for fun. Apart from programming, my hobbies include astronomy, playing music, and going to the gym.
Ben Gamble, Field CTO
Ben Gamble serves as Field CTO at Ververica. In his role, he focuses on developing messaging and materials to engage with key stakeholders, including clients, prospects, and industry analysts. 
With a background primarily in engineering leadership and entrepreneurship, Ben has extensive experience building and managing engineering teams across various sectors, including logistics, gaming, and mobile applications. His expertise lies in developing solutions centered around real-world interactions, particularly in areas such as GPS-based technologies, augmented reality, and multi-user collaboration.
Outside of his professional life, Ben is an avid technology enthusiast. He enjoys playing and creating video games and trading cards. Ben balances his career with family life, being a father to two young children and a caretaker to three pet ducks.
Muhammet Orazov, Senior Software Engineer
Experienced in databases and distributed systems, and learning about streaming systems at Ververica.
Lorenzo Affetti, Senior Software Engineer
Lorenzo Affetti works as a Software Engineer enhancing the Ververica product offering to enable users to run and manage their data streaming jobs seamlessly. He contributes to the Apache Flink open source project, a well-known streaming engine used in real-world applications.

After obtaining his PhD in Computer Science in the field of stream processing at Politecnico di Milano, Lorenzo worked at InfluxData developing languages and runtimes for processing time series data. He also worked at Huawei Technologies (Munich Research Center) as a research engineer in the field of object storage cloud services.

His main interest lies in distributed systems for data processing, storage, and retrieval.
Ahmed Hamdy, Senior Software Engineer
Software Engineer on the Flink Engine team at Ververica, helping deliver core Flink and connector features to Ververica Cloud and Ververica Platform users; previously at AWS Managed Service for Flink.

Jeyhun Karimov, Staff Software Engineer
Jeyhun Karimov works as a software engineer at Ververica GmbH. He holds a PhD from the Database Group at TU Berlin. His main focus is query processing and optimization in distributed data processing systems.
Ken Krugler, President at Scale Unlimited
Ken is a long-time developer, trainer, and open source enthusiast. He is president of Scale Unlimited, a big data consulting company, and he was the Founder & CTO of Krugle, a code search startup. Ken’s open source involvement includes being an Apache Software Foundation (ASF) member and a committer on the Apache Tika project. Most recently he’s presented on Apache Pinot and is a Startree All-Star. Ken lives in California and is a graduate of Massachusetts Institute of Technology (MIT).

Ververica Bootcamp Program

The Ververica Bootcamp Program is an intensive training initiative that transforms Apache Flink users into proficient data processing professionals. By translating complex Flink concepts into practical exercises rooted in real-world scenarios, we empower participants to tackle their toughest data challenges. Leveraging Ververica Cloud services, participants gain a deep understanding of Flink and learn to optimize the scalability and efficiency of their cloud-based solutions. This program is not just about learning; it’s about mastering Apache Flink and leading the future of data processing.

Level Up Your Stream Processing Skills

This intensive, 2-day face-to-face program is designed for Apache Flink users with 2-4 years of experience who want to take their skills to the intermediate level. We'll delve into advanced Flink concepts and techniques, empowering you to build and deploy highly scalable and efficient real-time data processing pipelines. Leveraging Ververica Cloud services, you'll gain a deeper understanding of Flink and explore best practices for production deployments.

Target Audience:

Apache Flink users with 2-4 years of experience who are comfortable with core concepts and want to become proficient in advanced functionalities.

Key Topics

  • Advanced Windowing Operations
  • Time Management Strategies
  • State Management Techniques
  • Serialization Optimization
  • Exactly Once Processing
  • Fault Tolerance
  • Enrichment Techniques
  • Scalability Optimization
  • Flink SQL Functions
  • Table API Features
  • Workflow Design
  • Using Paimon Effectively

Learning Outcomes

Master Advanced Windowing Operations in Apache Flink:

  • Understand and implement session windows, tumbling/sliding windows with triggers, and time management strategies (Event Time, Processing Time, Ingestion Time).
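
For orientation, a session window in event time might look roughly like the sketch below, written against Flink's Java DataStream API. The ClickEvent type, its field names, and the 15-minute gap are illustrative assumptions, not course material; tumbling and sliding windows follow the same pattern with TumblingEventTimeWindows or SlidingEventTimeWindows assigners plus custom triggers where needed.

    import java.time.Duration;

    import org.apache.flink.api.common.eventtime.WatermarkStrategy;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.api.windowing.assigners.EventTimeSessionWindows;
    import org.apache.flink.streaming.api.windowing.time.Time;

    public class SessionWindowSketch {

        // Hypothetical event type, used only for illustration.
        public static class ClickEvent {
            public String userId = "user-1";
            public long timestampMillis = System.currentTimeMillis();
            public int clicks = 1;
        }

        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            DataStream<ClickEvent> events = env
                    .fromElements(new ClickEvent())  // stand-in for a real source/connector
                    .assignTimestampsAndWatermarks(
                            WatermarkStrategy
                                    .<ClickEvent>forBoundedOutOfOrderness(Duration.ofSeconds(5))
                                    .withTimestampAssigner((e, ts) -> e.timestampMillis));

            // Event-time session windows: a user's session closes after 15 minutes of inactivity.
            events.keyBy(e -> e.userId)
                  .window(EventTimeSessionWindows.withGap(Time.minutes(15)))
                  .sum("clicks")
                  .print();

            env.execute("session-window-sketch");
        }
    }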

Optimize State Management for High Performance in Flink Applications:

  • Apply advanced state management techniques including state partitioning and RocksDB integration.
  • Optimize state size and access patterns for enhanced performance.
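
As a rough illustration of keyed (partitioned) state combined with the RocksDB state backend, the sketch below counts events per key; the class and field names are hypothetical, and the flink-statebackend-rocksdb dependency is assumed to be on the classpath.

    import org.apache.flink.api.common.functions.RichFlatMapFunction;
    import org.apache.flink.api.common.state.ValueState;
    import org.apache.flink.api.common.state.ValueStateDescriptor;
    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.contrib.streaming.state.EmbeddedRocksDBStateBackend;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.util.Collector;

    public class KeyedStateSketch {

        // Counts events per key; the state is partitioned by the key of the stream.
        public static class CountPerKey extends RichFlatMapFunction<String, Tuple2<String, Long>> {
            private transient ValueState<Long> count;

            @Override
            public void open(Configuration parameters) {
                count = getRuntimeContext().getState(
                        new ValueStateDescriptor<>("count", Long.class));
            }

            @Override
            public void flatMap(String key, Collector<Tuple2<String, Long>> out) throws Exception {
                Long current = count.value();
                long next = (current == null ? 0L : current) + 1L;
                count.update(next);   // keep state small: a single long per key
                out.collect(Tuple2.of(key, next));
            }
        }

        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            // Spill large keyed state to RocksDB and checkpoint it incrementally.
            env.setStateBackend(new EmbeddedRocksDBStateBackend(true));

            env.fromElements("a", "b", "a")
               .keyBy(s -> s)
               .flatMap(new CountPerKey())
               .print();

            env.execute("keyed-state-sketch");
        }
    }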

Improve Workflow Performance via Advanced Serialization Techniques:

  • Learn how to reduce time spent serializing and deserializing data, both for data sources and sinks (connectors) and over the network.
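
One common first step, sketched below, is keeping record types within Flink's POJO rules and disabling the generic (Kryo) fallback so any accidental slow path fails fast; the Order type is a made-up example, not part of the course.

    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class SerializationSketch {

        // Follows Flink's POJO rules (public no-arg constructor, public fields),
        // so it uses the efficient POJO serializer instead of the Kryo fallback.
        public static class Order {
            public String orderId;
            public long amountCents;
            public Order() {}
            public Order(String orderId, long amountCents) {
                this.orderId = orderId;
                this.amountCents = amountCents;
            }
        }

        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            // Fail fast if any type would silently fall back to generic serialization.
            env.getConfig().disableGenericTypes();

            env.fromElements(new Order("o-1", 1999L), new Order("o-2", 4500L))
               .keyBy(o -> o.orderId)   // forces Order records to be serialized between tasks
               .print();

            env.execute("serialization-sketch");
        }
    }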

Deep Dive into Exactly-Once Processing and Failure Recovery:

  • Understand the differences between at-least-once, exactly-once, and exactly-once end-to-end. Learn how to effectively use exactly-once processing when faced with bad data, infrastructure failures, and workflow bugs.
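
For context, a minimal checkpointing setup for exactly-once state consistency might look like the sketch below; the interval, timeout, and checkpoint path are illustrative assumptions, and exactly-once end-to-end additionally needs transactional or idempotent sinks.

    import org.apache.flink.streaming.api.CheckpointingMode;
    import org.apache.flink.streaming.api.environment.CheckpointConfig;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class ExactlyOnceSketch {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            // Checkpoint every 60s; Flink's internal state is then exactly-once on recovery.
            env.enableCheckpointing(60_000L, CheckpointingMode.EXACTLY_ONCE);

            CheckpointConfig checkpoints = env.getCheckpointConfig();
            checkpoints.setMinPauseBetweenCheckpoints(30_000L);          // leave headroom for processing
            checkpoints.setCheckpointTimeout(10 * 60_000L);              // give up on stuck checkpoints
            checkpoints.setCheckpointStorage("file:///tmp/flink-checkpoints");  // hypothetical path

            // End-to-end exactly-once also requires a transactional sink,
            // e.g. a Kafka sink configured with an exactly-once delivery guarantee.

            env.fromElements(1, 2, 3).print();
            env.execute("exactly-once-sketch");
        }
    }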

Develop Complex Real-Time Pipelines:

  • Build a workflow that processes a continuous stream of events to generate both dashboard and analytics results.
  • Learn how best to enrich data from a variety of data sources.
  • Optimize complex workflows using pre-filtering, pruning, async I/O, broadcast streams, parallel partial enrichments, and other techniques.
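
One of those techniques, asynchronous enrichment via Flink's AsyncDataStream, might look roughly like the sketch below; the CustomerLookup function and its fake lookup are hypothetical stand-ins for a real asynchronous database or HTTP client.

    import java.util.Collections;
    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.TimeUnit;

    import org.apache.flink.streaming.api.datastream.AsyncDataStream;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.api.functions.async.ResultFuture;
    import org.apache.flink.streaming.api.functions.async.RichAsyncFunction;

    public class AsyncEnrichmentSketch {

        // Hypothetical enrichment: look up a customer name for each order id.
        public static class CustomerLookup extends RichAsyncFunction<String, String> {
            @Override
            public void asyncInvoke(String orderId, ResultFuture<String> resultFuture) {
                // In a real pipeline this would call a non-blocking database or HTTP client.
                CompletableFuture
                    .supplyAsync(() -> orderId + " -> customer-42")   // placeholder lookup
                    .thenAccept(enriched -> resultFuture.complete(Collections.singleton(enriched)));
            }
        }

        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            DataStream<String> orders = env.fromElements("order-1", "order-2");

            // Up to 100 in-flight lookups, each timing out after 5 seconds.
            AsyncDataStream
                    .unorderedWait(orders, new CustomerLookup(), 5, TimeUnit.SECONDS, 100)
                    .print();

            env.execute("async-enrichment-sketch");
        }
    }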

Use Flink SQL & Table APIs to Implement Workflows:

  • Utilize the advanced functionalities of Flink SQL, including UDFs and Table Functions, and master the Flink Table API for unified data transformations and real-time analytics.
  • Compare and contrast the resulting workflow with the Java API.
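
As a flavor of what that looks like in practice, the sketch below registers a hypothetical MASK_EMAIL scalar UDF and calls it from Flink SQL; the same transformation could equally be expressed with the Table API or the Java DataStream API.

    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.TableEnvironment;
    import org.apache.flink.table.functions.ScalarFunction;

    public class FlinkSqlUdfSketch {

        // Hypothetical scalar UDF that masks the local part of an e-mail address.
        public static class MaskEmail extends ScalarFunction {
            public String eval(String email) {
                int at = email == null ? -1 : email.indexOf('@');
                return at < 0 ? email : "***" + email.substring(at);
            }
        }

        public static void main(String[] args) {
            TableEnvironment tEnv = TableEnvironment.create(EnvironmentSettings.inStreamingMode());
            tEnv.createTemporarySystemFunction("MASK_EMAIL", MaskEmail.class);

            // Inline VALUES keeps the example self-contained; a real job would read from a connector.
            tEnv.executeSql(
                    "SELECT name, MASK_EMAIL(email) AS masked_email "
                  + "FROM (VALUES ('Ada', 'ada@example.com')) AS users(name, email)")
                .print();
        }
    }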

Designing Optimized Workflows:

  • Learn about situations where splitting a workflow into multiple components improves efficiency and reduces operational complexity.
  • Learn how to use Paimon as an efficient and low-overhead data bridge between workflows.
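
For a taste of the Paimon side, the Flink SQL below creates a Paimon catalog and a table that one workflow can write while others read it as a stream or a batch snapshot; the warehouse path, catalog, and table names are made up, and the Paimon connector jar is assumed to be on the classpath.

    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.TableEnvironment;

    public class PaimonBridgeSketch {
        public static void main(String[] args) {
            TableEnvironment tEnv = TableEnvironment.create(EnvironmentSettings.inStreamingMode());

            // Register a Paimon catalog backed by a (hypothetical) local warehouse directory.
            tEnv.executeSql(
                    "CREATE CATALOG paimon WITH ("
                  + "  'type' = 'paimon',"
                  + "  'warehouse' = 'file:///tmp/paimon'"
                  + ")");
            tEnv.executeSql("USE CATALOG paimon");

            // One workflow writes into this table; downstream workflows read it as a stream
            // or as a batch snapshot, which is what makes it a low-overhead bridge.
            tEnv.executeSql(
                    "CREATE TABLE IF NOT EXISTS enriched_orders ("
                  + "  order_id STRING,"
                  + "  customer STRING,"
                  + "  amount_cents BIGINT,"
                  + "  PRIMARY KEY (order_id) NOT ENFORCED"
                  + ")");
        }
    }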

Interested in being a partner for the next Flink Forward conference?

Contact us