Learn to create Real-time Stream Processing applications using Apache Spark
What you’ll learn
Real-time Stream Processing Concepts
Spark Structured Streaming APIs and Architecture
Working with File Streams
Working With Kafka Source and Integrating Spark with Kafka
Stateless and Stateful Streaming Transformations
Windowing Aggregates using Spark Streaming
Watermarking and State Cleanup
Streaming Joins and Aggregation
Handling Memory Problems with Streaming Joins
Creating Arbitrary Streaming Sinks
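To give a flavor of the windowing and watermarking topics listed above, here is a minimal pure-Python sketch of tumbling-window counts with watermark-based late-data dropping. This is only a conceptual illustration of the idea covered in the course, not Spark API code; the function name and event format are invented for this sketch.

```python
from collections import defaultdict

def tumbling_window_counts(events, window_size, watermark_delay):
    """Conceptual sketch (not Spark code) of event-time tumbling-window
    counts with a watermark: events older than
    (max event time seen - watermark_delay) are dropped, mirroring how
    Spark Structured Streaming cleans up old window state."""
    counts = defaultdict(int)
    max_event_time = 0
    for event_time, word in events:
        max_event_time = max(max_event_time, event_time)
        watermark = max_event_time - watermark_delay
        if event_time < watermark:
            continue  # too late: the window's state may already be purged
        # Assign the event to its tumbling window by start time.
        window_start = (event_time // window_size) * window_size
        counts[(window_start, word)] += 1
    return dict(counts)

# Events are (event_time_in_seconds, word) pairs; the last one arrives late.
events = [(1, "spark"), (3, "kafka"), (12, "spark"), (2, "spark")]
result = tumbling_window_counts(events, window_size=10, watermark_delay=5)
# The late (2, "spark") event is dropped because the watermark has
# advanced to 12 - 5 = 7 by the time it arrives.
```

Spark performs the same bookkeeping for you at scale; the watermark is what lets it discard state for old windows instead of keeping it forever.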
Requirements
Spark Fundamentals and exposure to the Spark DataFrame APIs
Kafka Fundamentals and working knowledge of Apache Kafka
Programming Knowledge of the Python Language
A Recent 64-bit Windows/Mac/Linux Machine with 8 GB RAM
About the Course
I created this course, Apache Spark 3 – Real-time Stream Processing using Python, to help you understand real-time stream processing with Apache Spark and apply that knowledge to build real-time stream processing solutions. The course is example-driven and follows a working-session-like approach: we take a live-coding approach and explain all the needed concepts along the way.
Who should take this course?
I designed this course for software engineers who want to develop real-time stream processing pipelines and applications using Apache Spark. It is also for data architects and data engineers who are responsible for designing and building their organization's data-centric infrastructure, as well as for managers and architects who do not work on Spark implementations directly but work with the people who implement Apache Spark at the ground level.
Spark Version used in the Course
This course uses Apache Spark 3.x. I have tested all the source code and examples in this course on the Apache Spark 3.0.0 open-source distribution.
Who this course is for:
- Software Engineers and Architects who want to design and develop Big Data engineering projects using Apache Spark
- Programmers and Developers aspiring to grow and learn Data Engineering using Apache Spark