When you enroll through our links, we may earn a small commission, at no extra cost to you. This helps keep our platform free and motivates us to keep adding value.

Real Time Spark Project for Beginners: Hadoop, Spark, Docker

Building Real Time Data Pipeline Using Apache Kafka, Apache Spark, Hadoop, PostgreSQL, Django and Flexmonster on Docker

Rating: 5 (95 reviews)
₹519

This Course Includes

  • Udemy
  • 5 (95 reviews)
  • 6h 34m
  • English
  • Online - Self Paced
  • Professional certificate

About Real Time Spark Project for Beginners: Hadoop, Spark, Docker

In many data centers, different types of servers generate large amounts of data in real time. An event, in this case, is a status update from a server in the data center.

This data needs to be processed in real time to generate insights for the people who monitor the servers and the data center. They track server status continuously and, when issues occur, find a resolution quickly to keep the servers stable.
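To make the idea of a server-status event concrete, here is a minimal sketch of what one such event might look like as a JSON record. The field names (`host`, `cpu_pct`, `status`, `ts`) are hypothetical, not the course's actual schema:

```python
import json
import random
import time

# Hypothetical schema for a server-status event; field names are
# illustrative and not taken from the course materials.
def make_server_event(host: str) -> str:
    """Build one JSON-encoded status event for a server."""
    event = {
        "host": host,                                   # server identifier
        "cpu_pct": round(random.uniform(0, 100), 1),    # CPU load sample
        "status": random.choice(["OK", "WARN", "DOWN"]),
        "ts": int(time.time()),                         # Unix timestamp
    }
    return json.dumps(event)

# In the pipeline, each server would publish events like this to a
# Kafka topic, from which Spark Structured Streaming consumes them.
print(make_server_event("web-01"))
```

In a real deployment the producer side would serialize events like this into Kafka rather than printing them.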

Since the data is huge and arrives in real time, we need to choose the right architecture, with scalable storage and computation frameworks.

Hence, we build a real-time data pipeline using Apache Kafka, Apache Spark, Hadoop, PostgreSQL, Django and Flexmonster on Docker to generate insights from this data.

The Spark project/data pipeline is built using Apache Spark with Scala and PySpark on an Apache Hadoop cluster that runs on top of Docker.
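To give a sense of how such a stack can be wired together on Docker, here is a minimal `docker-compose` sketch. The image names, ports, and credentials below are assumptions for illustration only, not the course's actual configuration files:

```yaml
# Illustrative docker-compose sketch -- not the course's actual file.
version: "3"
services:
  zookeeper:
    image: bitnami/zookeeper:latest   # assumed image
    environment:
      ALLOW_ANONYMOUS_LOGIN: "yes"    # dev-only setting
    ports: ["2181:2181"]
  kafka:
    image: bitnami/kafka:latest       # assumed image
    depends_on: [zookeeper]
    ports: ["9092:9092"]
  postgres:
    image: postgres:14
    environment:
      POSTGRES_PASSWORD: example      # placeholder credential
    ports: ["5432:5432"]
```

The course's own setup additionally runs the Hadoop and Spark cluster in containers; the fragment above only shows the messaging and storage services to illustrate the pattern.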

Data visualization is built using the Django web framework and Flexmonster.

Apache Spark is an open-source unified analytics engine for large-scale data processing. Spark provides an interface for programming clusters with implicit data parallelism and fault tolerance.

Apache Kafka is a distributed event store and stream-processing platform. It is an open-source system developed by the Apache Software Foundation, written in Java and Scala. The project aims to provide a unified, high-throughput, low-latency platform for handling real-time data feeds.

Apache Hadoop is a collection of open-source software utilities that facilitate using a network of many computers to solve problems involving massive amounts of data and computation. It provides a software framework for distributed storage and processing of big data using the MapReduce programming model.

A NoSQL (originally referring to "non-SQL" or "non-relational") database provides a mechanism for storage and retrieval of data that is modeled in means other than the tabular relations used in relational databases.

What You Will Learn?

  • Complete development of a real-time streaming data pipeline using a Hadoop and Spark cluster on Docker.
  • Setting up a single-node Hadoop and Spark cluster on Docker.
  • Features of Spark Structured Streaming using Spark with Scala.
  • Features of Spark Structured Streaming using Spark with Python (PySpark).
  • How to use PostgreSQL with Spark Structured Streaming.
  • Basic understanding of Apache Kafka.
  • How to build data visualisation using the Django web framework and Flexmonster.
  • Fundamentals of Docker and containerization.