Module 01 · Foundations
Foundations of AI, ML & Data
Build the AI/ML mindset and hands-on Python foundation employers pay a premium for.
Start with what AI vs. ML vs. Deep Learning actually means,
then move through the full ML lifecycle — supervised,
unsupervised, and reinforcement learning. Master data
preprocessing, feature engineering, and model evaluation
metrics. Close with real-world Exploratory Data Analysis on
live datasets.
Core Topics
- AI vs. ML vs. Deep Learning
- Supervised, Unsupervised, Reinforcement Learning
- AI/ML Lifecycle & Project Structure
- Data Preprocessing & Feature Engineering
- Model Evaluation: Accuracy, Precision, Recall, F1, ROC-AUC
- Bias, Variance & Overfitting · Ethics in AI
Hands-On Projects
- Python, NumPy, Pandas environment setup
- Exploratory Data Analysis (EDA) on real-world datasets
Python · NumPy · Pandas · Jupyter Notebook · Google Colab · EDA
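The evaluation metrics covered in this module — accuracy, precision, recall, and F1 — can be computed by hand before reaching for a library. A minimal sketch on made-up predictions (the labels below are illustrative, not a real dataset):

```python
# Classification metrics from scratch: count the confusion-matrix cells,
# then apply the standard formulas. Toy labels only.

def evaluate(y_true, y_pred, positive=1):
    """Return accuracy, precision, recall, and F1 for a binary task."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    tn = len(y_true) - tp - fp - fn

    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}

# Six toy labels with two mistakes: one false negative, one false positive.
y_true = [1, 0, 1, 1, 0, 0]
y_pred = [1, 0, 0, 1, 1, 0]
metrics = evaluate(y_true, y_pred)
```

Once the formulas are familiar, `sklearn.metrics` provides the production versions of these same calculations.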
Module 02 · Machine Learning
Machine Learning Algorithms
Build, train, and tune the full spectrum of ML models used in production.
Cover the complete machine learning algorithm toolkit — from
linear and logistic regression through tree-based ensembles,
SVMs, and clustering. Apply gradient boosting with XGBoost and
LightGBM. Build your first customer churn prediction and house
price forecasting models on real datasets.
Core Topics
- Linear, Polynomial, Ridge/Lasso Regression
- Logistic Regression, KNN, Decision Trees, Random Forest
- Gradient Boosting, XGBoost / LightGBM, SVM, Naive Bayes
- K-Means, Hierarchical Clustering, DBSCAN
- PCA & Dimensionality Reduction
- Reinforcement Learning Introduction: Q-Learning
Hands-On Projects
- Customer churn prediction model
- House price prediction model
Scikit-learn · XGBoost · LightGBM · Matplotlib · Seaborn
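The house price project above starts from the simplest model in this module: linear regression. A sketch of the closed-form least-squares fit on invented size/price numbers (illustrative only — the real project uses a full dataset and scikit-learn):

```python
# Simple linear regression from scratch: slope = cov(x, y) / var(x),
# intercept from the means. Toy data chosen to follow price = 2*size + 1.

def fit_line(xs, ys):
    """Return (slope, intercept) minimising squared error for y = slope*x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    slope = cov_xy / var_x
    intercept = mean_y - slope * mean_x
    return slope, intercept

sizes = [1.0, 2.0, 3.0, 4.0]
prices = [3.0, 5.0, 7.0, 9.0]
slope, intercept = fit_line(sizes, prices)
```

Ridge and Lasso extend exactly this objective with a penalty term, which is where the module's regularization topics pick up.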
Module 03 · NLP & LLMs
Natural Language Processing (NLP) & LLM Integration
From tokenization to transformers — build applications powered by the world's best LLMs.
Move through text preprocessing, word embeddings, and
Transformer architecture all the way to integrating live LLM
APIs. Use OpenAI and Anthropic APIs to build a chatbot,
document summarizer, and sentiment analyzer. Master prompt
engineering and embeddings for production AI applications.
Core Topics
- Text Preprocessing: Tokenization, Stemming, Lemmatization
- Bag of Words, TF-IDF, N-Grams, Named Entity Recognition
- Word Embeddings: Word2Vec, GloVe
- Transformers & Attention Mechanism · BERT Architecture
- Using LLM APIs: OpenAI, Anthropic, HuggingFace
- Prompt Engineering & Embeddings API
Hands-On Projects
- Build a chatbot using OpenAI/Anthropic API
- Document summarization tool
- Sentiment analyzer
OpenAI API · Anthropic Claude API · HuggingFace · NLTK · spaCy
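TF-IDF, one of the core topics above, rewards terms that are frequent in a document but rare across the corpus. A from-scratch sketch on a toy corpus, using one common smoothed formulation (libraries such as scikit-learn differ in the exact variant):

```python
# TF-IDF by hand: term frequency within a document, scaled by a smoothed
# inverse document frequency across the corpus. Toy sentences only.
import math

corpus = [
    "the cat sat on the mat",
    "the dog sat on the log",
    "cats and dogs are pets",
]
docs = [doc.split() for doc in corpus]

def tf(term, doc):
    return doc.count(term) / len(doc)

def idf(term):
    df = sum(1 for doc in docs if term in doc)   # documents containing the term
    return math.log(len(docs) / (1 + df)) + 1    # smoothed variant

def tf_idf(term, doc):
    return tf(term, doc) * idf(term)

# "sat" appears in two documents, "cat" in only one,
# so "cat" earns the higher weight in the first document.
score_sat = tf_idf("sat", docs[0])
score_cat = tf_idf("cat", docs[0])
```

This is the same intuition that makes TF-IDF a strong baseline before moving to Word2Vec, GloVe, and transformer embeddings.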
Module 04 · Agentic AI
Agentic AI — Build Autonomous AI Systems
Build AI agents that reason, plan, and act — the most in-demand AI skill of 2026.
Understand what makes AI systems truly autonomous — reasoning,
planning, self-reflection, and multi-agent coordination. Build
with LangChain, CrewAI, and the OpenAI Assistants API. Deliver
a portfolio of agent projects: an AI research assistant, an
autonomous data analyst, and a multi-agent workflow automation
system.
Core Topics
- Agentic AI: Autonomous Agents, Tool Use in LLMs
- Reasoning, Planning & Self-Reflection
- Multi-Agent Systems
- LangChain Agents · AutoGPT · CrewAI
- OpenAI Assistants API
- Tool Calling, Function Calling & Memory Architectures
Hands-On Projects
- AI research assistant agent
- Autonomous data analyst agent
- Multi-agent workflow automation project
LangChain · CrewAI · AutoGPT · OpenAI Assistants API · Tool Calling
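The tool-calling loop named above is the heart of every agent framework: the model either requests a tool or returns a final answer, and results are fed back until it finishes. A minimal sketch with a hard-coded stub standing in for a real LLM API — all function names here are illustrative, not LangChain or OpenAI API calls:

```python
# A toy agent loop: the "model" (a deterministic stub) asks for a tool once,
# receives the result, then produces a final answer.

def calculator(expression: str) -> str:
    """A 'tool' the agent can call; eval is acceptable for this toy arithmetic."""
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}

def stub_llm(messages):
    """Stands in for an LLM: requests the calculator once, then answers."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "calculator", "arguments": "6 * 7"}
    tool_result = [m for m in messages if m["role"] == "tool"][-1]["content"]
    return {"answer": f"The result is {tool_result}."}

def run_agent(question, max_steps=5):
    messages = [{"role": "user", "content": question}]
    for _ in range(max_steps):
        reply = stub_llm(messages)
        if "answer" in reply:                 # model is done: return its answer
            return reply["answer"]
        result = TOOLS[reply["tool"]](reply["arguments"])       # execute tool
        messages.append({"role": "tool", "content": result})   # feed result back
    raise RuntimeError("agent did not finish within max_steps")

answer = run_agent("What is 6 times 7?")
```

Real frameworks add schemas for tool arguments, memory, and error handling, but the loop shape — decide, act, observe, repeat — is the same.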
Module 05 · RAG
Retrieval-Augmented Generation (RAG)
Give LLMs access to your own data — the architecture behind every enterprise AI product.
Understand why RAG exists, how vector embeddings and semantic
search work, and how to architect a full RAG pipeline from
chunking strategy through retrieval and generation. Build
practical applications: a document Q&A chatbot, a company
knowledge assistant, and a PDF chatbot from scratch.
Core Topics
- Why RAG Is Needed & LLM Limitations
- Vector Embeddings & Semantic Search
- Chunking Strategies & Prompt Templates
- RAG Architecture & Pipeline Design
- Vector Databases: FAISS, Pinecone, Weaviate, ChromaDB
Hands-On Projects
- Document Q&A chatbot
- Company knowledge assistant
- PDF chatbot from scratch
LangChain · LlamaIndex · FAISS · Pinecone · ChromaDB · Weaviate
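The retrieval step of the pipeline above can be made concrete in a few lines. Real systems use learned embeddings and a vector database; this sketch substitutes bag-of-words vectors and in-memory cosine similarity so the mechanics are visible (the chunks are invented sample text):

```python
# Toy RAG retrieval: embed chunks and query, rank by cosine similarity,
# return the top match. Word-count dicts stand in for real embeddings.
import math

chunks = [
    "Our refund policy allows returns within 30 days.",
    "The office is open Monday to Friday, nine to five.",
    "Support tickets are answered within one business day.",
]

def embed(text):
    """Toy 'embedding': a word-count dictionary (stand-in for a real model)."""
    vec = {}
    for word in text.lower().split():
        word = word.strip(".,?!")
        vec[word] = vec.get(word, 0) + 1
    return vec

def cosine(a, b):
    dot = sum(a[k] * b.get(k, 0) for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

index = [(chunk, embed(chunk)) for chunk in chunks]

def retrieve(query, k=1):
    qv = embed(query)
    ranked = sorted(index, key=lambda item: cosine(qv, item[1]), reverse=True)
    return [chunk for chunk, _ in ranked[:k]]

top = retrieve("what is the refund policy for returns?")
```

In a full pipeline, the retrieved chunk is then placed into the prompt template so the LLM generates an answer grounded in your data.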
Module 06 · Big Data
Big Data for AI
Process data at scale — the infrastructure skill that separates junior from senior AI engineers.
Learn what Big Data means for AI teams — distributed
computing, batch vs. streaming processing, and the tools that
power large-scale ML. Work hands-on with PySpark and Apache
Spark to process large datasets and build ML pipelines that
handle real enterprise data volumes.
Core Topics
- What is Big Data: The 5 Vs
- Data Pipelines & Distributed Computing
- Batch vs. Streaming Processing
- Hadoop Ecosystem · Apache Spark · PySpark · Kafka
- Data Lakes & Feature Pipelines
- Large-Scale ML & Distributed Training
Hands-On Projects
- Process a large dataset using PySpark
- Build an ML pipeline on Apache Spark
PySpark · Apache Spark · Kafka · Hadoop · Databricks
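The map → shuffle → reduce pattern underlying Spark and Hadoop can be sketched in plain Python on an in-memory "dataset" before running it on a cluster. The data below is invented; a PySpark job expresses the same shape with `flatMap`, `map`, and `reduceByKey` over a distributed RDD:

```python
# Word count in the map/shuffle/reduce shape used by batch engines.
from collections import defaultdict
from functools import reduce

lines = [
    "spark handles big data",
    "spark distributes the work",
    "big data needs big tools",
]

# Map: turn each line into (word, 1) pairs.
mapped = [(word, 1) for line in lines for word in line.split()]

# Shuffle: group pairs by key, as the cluster does between stages.
grouped = defaultdict(list)
for word, count in mapped:
    grouped[word].append(count)

# Reduce: sum the counts for each word.
word_counts = {word: reduce(lambda a, b: a + b, counts)
               for word, counts in grouped.items()}
```

The point of Spark is that each stage runs in parallel across machines, so the identical logic scales from these three lines to terabytes.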
Module 07 · Cloud & MLOps
Cloud for AI/ML & MLOps
Take models from notebook to production — the skill that turns engineers into senior engineers.
Deploy AI at scale using AWS SageMaker, Azure ML, and Google
Vertex AI. Master Docker, Kubernetes basics, MLflow, and
Kubeflow. Build an end-to-end ML pipeline with CI/CD, model
versioning, and monitoring — graduate with full production
deployment experience on your portfolio.
Core Topics
- Cloud for AI: Scalability, GPU Infrastructure, Model Deployment
- AWS: S3, EC2, SageMaker, Lambda
- Azure: Azure ML, Data Factory
- Google Cloud: Vertex AI, BigQuery
- Docker & Kubernetes Basics
- Kubeflow · MLflow · Model Versioning · CI/CD for ML
Hands-On Projects
- Deploy an ML model as a REST API
- Build an end-to-end ML pipeline
- Model versioning and monitoring setup
AWS SageMaker · Azure ML · Vertex AI · Docker · MLflow · Kubeflow · Git
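One idea behind model versioning — the final project topic above — is deriving a deterministic version id from the model's contents, so identical models always resolve to the same version. A minimal sketch with a hypothetical in-memory registry (registries like MLflow's add storage, stages, and lineage on top of ideas like this):

```python
# Content-addressed model versioning: serialise params + metadata
# deterministically, hash them, and use the hash prefix as a version id.
# The registry dict and all names here are illustrative.
import hashlib
import json

def version_id(params: dict, metadata: dict) -> str:
    """Derive a short, deterministic version id from model contents."""
    payload = json.dumps({"params": params, "metadata": metadata}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()[:12]

registry = {}  # version id -> stored model record

def register(params, metadata):
    vid = version_id(params, metadata)
    registry[vid] = {"params": params, "metadata": metadata}
    return vid

# Registering the same model twice yields the same id; a change yields a new one.
v1 = register({"slope": 2.0, "intercept": 1.0}, {"dataset": "toy-v1"})
v2 = register({"slope": 2.0, "intercept": 1.0}, {"dataset": "toy-v1"})
v3 = register({"slope": 2.1, "intercept": 0.9}, {"dataset": "toy-v2"})
```

Deterministic ids make deployments reproducible: the monitoring setup in the project can always trace a prediction back to the exact model version that produced it.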