Overview
In this 8-hour course, you'll gain an in-depth understanding of Big Data architecture and pipelines. From ingestion frameworks to storage systems, this course will equip you to design scalable, robust Big Data solutions for real-world scenarios.
What you will be able to do after this course
- Master the fundamentals of Big Data architecture and its key components.
- Gain proficiency in selecting and utilizing Big Data ingestion frameworks such as Kafka and NiFi.
- Develop skills to handle Big Data storage solutions like HBase and Cassandra.
- Understand the principles of data formats and their implications for processing pipelines.
- Acquire the expertise to build comprehensive ETL pipelines leveraging modern frameworks.
Course Instructor(s)
Bhavuk Chawla is a seasoned software engineer with a passion for teaching and simplifying complex technical topics. With years of experience working with cutting-edge Big Data technologies, Bhavuk brings practical insights and hands-on expertise to his courses. His teaching philosophy emphasizes practical skills and problem-solving techniques tailored for real-world applications.
Who is it for?
This course is ideal for software professionals aiming to deepen their knowledge of Big Data systems and methods. It is especially well suited to those preparing for certifications such as CCA175 or CCA159, as well as engineers looking to build or optimize Big Data pipelines. The course assumes a foundational understanding of Big Data principles and aims to take learners to an advanced level.