Agentic AI

This course provides hands-on training in Agentic AI and Large Language Models (LLMs), focusing on building intelligent agents, conversational pipelines, retrieval-augmented systems, and multi-agent workflows. Learners will gain practical skills in LangChain, LangGraph, CrewAI, and AutoGen, along with deployment, security, and ethical practices, preparing them for real-world AI engineering roles.

Course Overview

Foundations of Agentic AI and LLMs

  • Introduction to Agentic AI

  • Agentic AI vs. Generative AI

  • Types of agents: reactive, proactive, collaborative

  • LLM architecture: tokens, embeddings, transformers

  • Setting up your development environment (Python, OpenAI API)

  • Mini Project: Build your first conversational AI agent
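
A minimal sketch of the mini project above, assuming the official openai Python package (v1+) and an OPENAI_API_KEY environment variable; the model name is an illustrative choice rather than a course requirement.

```python
# Minimal conversational agent loop using the OpenAI Python SDK.
# Assumes `pip install openai` and OPENAI_API_KEY in the environment;
# the model name "gpt-4o-mini" is an illustrative choice.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
history = [{"role": "system", "content": "You are a helpful assistant."}]

while True:
    user_input = input("You: ")
    if user_input.lower() in {"quit", "exit"}:
        break
    history.append({"role": "user", "content": user_input})
    response = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    print("Agent:", reply)
```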

Conversational Pipelines and RAG (Retrieval-Augmented Generation)

  • NLP pipelines and conversation design

  • Prompt engineering and prompt chaining

  • Embeddings and vector databases (FAISS, Pinecone)

  • Retrieval-based augmentation of LLM outputs

  • Project: Create a Q&A system over documents using RAG
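
One way the document Q&A project can be sketched, assuming faiss-cpu, numpy, and the OpenAI embeddings and chat APIs; the sample chunks, model names, and top-1 retrieval are illustrative placeholders.

```python
# Minimal RAG sketch: embed document chunks, index them in FAISS,
# retrieve the closest chunk for a question, and answer with an LLM.
# Assumes `pip install faiss-cpu numpy openai`; model names are assumptions.
import faiss
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data], dtype="float32")

chunks = [
    "Refunds are processed within 5 business days.",
    "Support is available Monday to Friday, 9am-5pm.",
]
vectors = embed(chunks)
index = faiss.IndexFlatL2(vectors.shape[1])  # exact L2 search over chunk embeddings
index.add(vectors)

question = "How long do refunds take?"
_, ids = index.search(embed([question]), 1)  # top-1 nearest chunk
context = "\n".join(chunks[i] for i in ids[0])

answer = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": f"Answer using only this context:\n{context}\n\nQuestion: {question}",
    }],
)
print(answer.choices[0].message.content)
```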

LangChain Fundamentals & Tool Integration

  • Understanding chains, memory, tools, and agents in LangChain

  • Integrating external APIs and functions

  • Structured outputs with output parsers

  • Prompt templates and memory stores

  • Project: Build a LangChain-based productivity assistant
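
A minimal sketch of the LangChain building blocks covered here (prompt template, chat model, and output parser composed into a chain), assuming the langchain-core and langchain-openai packages; import paths shift between LangChain releases, so treat this as illustrative.

```python
# Minimal LangChain sketch: a prompt template piped into a chat model
# and a string output parser (LCEL-style composition).
# Assumes `pip install langchain-core langchain-openai` and OPENAI_API_KEY;
# the model name and prompt wording are illustrative.
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a productivity assistant. Be concise."),
    ("human", "Summarise my day given these tasks: {tasks}"),
])
llm = ChatOpenAI(model="gpt-4o-mini")

chain = prompt | llm | StrOutputParser()  # prompt -> model -> parsed string
print(chain.invoke({"tasks": "standup at 9, code review, write report"}))
```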

LangGraph and Stateful Agent Workflows

  • Introduction to LangGraph for multi-step workflows

  • Managing agent state, transitions, and memory

  • Error handling and response strategies

  • Project: Design a finance assistant with LangGraph workflows
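
A minimal sketch of a stateful LangGraph workflow (typed state flowing through two nodes), assuming the langgraph package; the routing logic is a placeholder where a real finance assistant would call an LLM.

```python
# Minimal LangGraph sketch: a typed state dict flows through two nodes.
# Assumes `pip install langgraph`; node logic is a placeholder and the
# StateGraph/START/END API shown reflects recent langgraph releases.
from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class FinanceState(TypedDict):
    question: str
    category: str
    answer: str

def classify(state: FinanceState) -> dict:
    # Placeholder routing; a real assistant would classify with an LLM.
    category = "budgeting" if "budget" in state["question"].lower() else "general"
    return {"category": category}

def answer(state: FinanceState) -> dict:
    return {"answer": f"[{state['category']}] advice for: {state['question']}"}

graph = StateGraph(FinanceState)
graph.add_node("classify", classify)
graph.add_node("answer", answer)
graph.add_edge(START, "classify")
graph.add_edge("classify", "answer")
graph.add_edge("answer", END)

app = graph.compile()
print(app.invoke({"question": "How do I budget for rent?", "category": "", "answer": ""}))
```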

Multi-Agent Systems with CrewAI and AutoGen

  • Multi-agent collaboration frameworks

  • Role-based delegation and task management (CrewAI)

  • Building reflective, self-correcting agents with AutoGen

  • Agent feedback loops, self-evaluation, and decision logic

  • Project: Implement a writing & editing agent team
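
A minimal sketch of the writing & editing team, assuming the crewai package and an OPENAI_API_KEY for its default LLM backend; the roles, goals, and task wording are illustrative.

```python
# Minimal CrewAI sketch: two role-based agents (writer, editor) run
# sequential tasks as a crew. Assumes `pip install crewai` and an
# OPENAI_API_KEY in the environment; all role/task text is illustrative.
from crewai import Agent, Task, Crew

writer = Agent(
    role="Writer",
    goal="Draft a short blog post on the given topic",
    backstory="An experienced technical writer.",
)
editor = Agent(
    role="Editor",
    goal="Tighten the draft and fix errors",
    backstory="A meticulous copy editor.",
)

draft_task = Task(
    description="Write a 150-word post about agentic AI.",
    expected_output="A 150-word draft.",
    agent=writer,
)
edit_task = Task(
    description="Edit the draft for clarity and correctness.",
    expected_output="A polished final post.",
    agent=editor,
)

crew = Crew(agents=[writer, editor], tasks=[draft_task, edit_task])
print(crew.kickoff())
```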

Deployment and Hosting of Agentic Systems

  • Introduction to the OpenAI Agents SDK & function calling

  • Hosting agents with FastAPI and Streamlit

  • Cloud integrations (AWS, GCP, Azure)

  • Monitoring and observability (LangFuse, Portkey)

  • Project: Deploy and test your personal AI assistant online
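
A minimal sketch of hosting an agent behind an HTTP endpoint with FastAPI, assuming fastapi, uvicorn, and the openai package; the route shape and model name are illustrative choices rather than a prescribed design.

```python
# Minimal FastAPI sketch: expose a chat agent as a POST endpoint.
# Assumes `pip install fastapi uvicorn openai` and OPENAI_API_KEY;
# run locally with: uvicorn main:app --reload
from fastapi import FastAPI
from pydantic import BaseModel
from openai import OpenAI

app = FastAPI()
client = OpenAI()

class ChatRequest(BaseModel):
    message: str

@app.post("/chat")
def chat(req: ChatRequest) -> dict:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": req.message}],
    )
    return {"reply": response.choices[0].message.content}
```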

Security, Ethics, and Trust in Agentic Systems

  • TRiSM framework (Trust, Risk, and Security Management)

  • Data privacy and model security

  • Handling hallucinations, bias, and misuse

  • Ethical agent design and governance

  • Assignment: Perform a risk audit for your deployed agent

Capstone Project & Future Directions

  • Capstone Project: Design, implement, and demo a complete agent system (e.g., legal researcher, CRM assistant, AI tutor)

  • Advanced concepts: self-evolving agents, multimodal agentic AI, long-term memory

  • Career pathways: Agent Developer, AI Workflow Engineer, LLMOps Specialist


Course Outcome

  • Build and deploy intelligent Agentic AI systems using LLMs.

  • Design conversational pipelines, RAG systems, and multi-agent workflows.

  • Apply LangChain, LangGraph, CrewAI, and AutoGen for real-world use cases.

  • Ensure secure, ethical, and scalable AI agent deployment.

  • Develop a capstone project ready for industry applications.