The AI Engineering Bootcamp, Cohort 8 Course Schedule & Curriculum
🧑💻 What is “AI Engineering?”
AI Engineering refers to the industry-relevant skills that data science and engineering teams need to successfully build, deploy, operate, and improve Large Language Model (LLM) applications in production environments.
In practice, this requires understanding both prototyping and production deployments.
During the prototyping phase, Prompt Engineering, Retrieval Augmented Generation (RAG), Agents, and Fine-Tuning are all necessary tools to be able to understand and leverage. Prototyping includes:
- Understanding LLM Architecture and Training
- Fine-Tuning LLMs & Embedding Models
- Building RAG Applications
- Building Agent and Multi-Agent Applications
- Deploying LLM Prototype Applications to Users
When productionizing LLM application prototypes, many aspects matter for delivering helpful, harmless, honest, reliable, and scalable solutions to your customers or stakeholders. Productionizing includes:
- Evaluating RAG and Agent Applications
- Improving Search and Retrieval Pipelines for Production
- Improving Agent and Multi-Agent Applications
- Monitoring Production KPIs for LLM Applications
- Setting up Production Endpoints for Open-Source LLMs and Embedding Models
- Building LLM Applications with Scalable, Production-Grade Components
- Understanding and Building with Agent Protocols
🧑🎓 Ideal Student
This course is designed for:
- Engineers, developers, builders: people coding every day
- People who want to code every day
- Anyone eager to go beyond theory and build real apps
- Hackers who love hands-on, fast-paced learning
This course is not designed for:
- Product Managers who want to remain in their roles
- Executives & leaders who want concepts, not code
- Individuals seeking only theoretical academic deep dives
- People who do not want to build every day
You must be willing to:
- 🚧 Accept that there is no easy way; it is hard and often thankless work to live and work out on the LLM and AI Engineering edge
- ❓ Put down your pride and embrace “I don’t know,” especially when it comes to new concepts and code
- 🕳️ Accept that you will not now or ever know everything about AI Engineering, as it requires you to know everything about AI and everything about Engineering
- 🎓 Set aside any prestige, degrees, and pedigree, and be prepared to demonstrate competence and show proof of work
- 🙏 Trust the process - if you’re here it’s because all of your endless self-studying and following AIE roadmaps hasn’t worked.
What does this look like?
- 🧑💻 Write programming code every single week
- 🤔 Complete a project where you decide what to build and why
- 🧑🤝🧑 Act as a true community member who gives first and tries to make things easy for others
- 🗣️ Participate along with your journey group and peer supporter during each live session
- 🏗️🚢🚀 Build, Ship, AND SHARE your work. This means selling, telling stories, and leading others in your network as the AI Engineering expert.
Whether you’re a former software engineer, data scientist, engineering manager, or even from a non-technical background, we have many stories of incredible folks who have transformed as part of our community.
We’ve been doing this a long time, and invite you to join us as you figure out what the next leg of your career journey looks like, whether that’s a new job, a startup, a side hustle, or simply more meaningful work than you’ve done so far in your career.
Sounds good?
You can jump in and start here to waste minimal time 👇
Complete the AI Engineering Bootcamp Challenge!
Don’t feel ready yet? No worries - check out the recommended prerequisites below.
🤔 Prerequisites
Required
The minimum prerequisite for this course is that you can complete The AI Engineering Bootcamp Challenge. We have complete tutorials on GitHub and walkthroughs on YouTube to help you. You can always use #ask-aim in Discord to get help from the community as well!
This means that you will:
- Set up an interactive development environment (Cursor recommended as of 2025)
- Build your first end-to-end LLM application as a developer (e.g., go Beyond Vibe Coding)
Recommended
We also recommend you get up to speed on important aspects of LLMs and how they’re built. Most importantly, we will use the idea of embeddings when we learn RAG, and we will use Inference throughout the course.
- Learn about Transformers, Attention, Embeddings, Training, and Inference with our 5-day email-based course (released Dec ‘24)
Optional
We have previously open-sourced multiple cohorts that you can use for free to start learning about basic patterns of AI Engineering, including LLM Engineering and LLM Ops.
- LLM Engineering, Cohort 3 (Nov ‘24 - Dec ‘24): 01: Introduction to LLM Engineering [Session 1 of Full Course, LLM Engineering Cohort 3]
- LLM Ops, Cohort 4 (Aug ‘23 - Sep ‘23, a bit dated now, but still useful for context): LLM Ops: Large Language Models in Production (Cohort 1, Aug-Sep 2023)
- For Data Scientists: It will probably be useful to go deeper into Python fundamentals. You can use code AI MAKERSPACE to learn for free from our friend and fellow creator Eric Riddoch: Taking Python to Production: A Professional Onboarding Guide
- For Software Engineers: It will probably be useful to go deeper into Machine Learning fundamentals. Please set aside some time to read and digest this book on Technical Strategy for AI Engineers In the Era of Deep Learning (also available free here).
🏆 Grading and Certification
To become AI-Makerspace Certified, which will open you up to additional opportunities for full and part-time work within our community and network, you must:
- Complete all project assignments and receive at least an 85% total grade. Homework project assignments are graded based on the following criteria:

| Criteria | Description | Points |
|---|---|---|
| Code | Code runs from start to finish without errors and produces the expected result. | 10 |
| Answers | Answers to each question in the assignment notebook are accurate and demonstrate a deep understanding of the concept. Note: the number of questions may vary from project to project. | 5 |
| Presentation | Loom video (or alternative) walkthrough of your notebook (~5 min). | 5 |
| Total | | 20 |
- The previous week's Homework is due every Tuesday before class starts.
- Late Submission Policy: Each day an assignment is submitted past the deadline, a penalty of 20% of the total assignment score will be applied.
- Complete the Certification Challenge within the required time frame, which will be assigned halfway through the cohort.
- Complete a final project and present during Demo Day. Demo Day project requirements include:
  - Participating live in Demo Day.
  - Final versions of a sharable GitHub repo and slide deck, to be posted at the same time as your Demo Day video on AI Makerspace’s YouTube.
📅 Detailed Course Schedule & Curriculum
🏗️ Prototyping (Build)
Week 1: Introduction, Vibe Check, and RAG
| 📚 Curriculum | 🧑💻 Assignment | 🧰 Tools |
|---|---|---|
| Live Session 1: 🤖 Introduction & Vibe Check |
- Understand course structure
- How to succeed as a certified AI Engineer on Demo Day!
- Meet your cohort, peer supporters, and journey group!
- LLM prototyping best practices, how to spin up quick end-to-end prototypes and vibe check them! | Vibe Checking The AIE Challenge
🚧 Advanced Build: Improve the vibes, then reevaluate | LLM: OpenAI GPT models UI: Vibe Coded w/ React Deployment: Vercel
Relevant papers
- Chain-of-Thought
- Principled Instructions
- LMs are few-shot learners | | Live Session 2: 🗃️ Embeddings and Retrieval Augmented Generation (RAG)
- Prompt Engineering best practices and the LLM Application Stack
- Understand embedding models and similarity search
- Understand Retrieval Augmented Generation = Dense Vector Retrieval + In-Context Learning
- Build a Python RAG app from scratch | Building a Pythonic RAG App from Scratch
🚧 Advanced Build: Add one or more optional ”extras” to the RAG pipeline | LLM: OpenAI GPT models Embedding Model: OpenAI embeddings Orchestration: OpenAI Python SDK
Relevant papers/blogs
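The "RAG = Dense Vector Retrieval + In-Context Learning" idea from Session 2 can be sketched in plain Python. This is a toy illustration, not the course assignment: hand-made 3-d vectors stand in for a real embedding model, and all names are made up for the example.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec: list[float], corpus: list[dict], k: int = 2) -> list[str]:
    """Dense vector retrieval: return the k most similar documents."""
    ranked = sorted(corpus, key=lambda d: cosine_similarity(query_vec, d["vec"]), reverse=True)
    return [d["text"] for d in ranked[:k]]

def build_prompt(query: str, contexts: list[str]) -> str:
    """In-context learning: stuff the retrieved passages into the prompt."""
    context_block = "\n".join(f"- {c}" for c in contexts)
    return f"Answer using only the context below.\n\nContext:\n{context_block}\n\nQuestion: {query}"

# Toy corpus with hand-made 3-d "embeddings"; a real app would call an
# embedding model (e.g., OpenAI embeddings) to produce these vectors.
corpus = [
    {"text": "Paris is the capital of France.", "vec": [0.9, 0.1, 0.0]},
    {"text": "The Louvre is in Paris.", "vec": [0.8, 0.2, 0.1]},
    {"text": "Python uses indentation.", "vec": [0.0, 0.1, 0.9]},
]
contexts = retrieve([1.0, 0.0, 0.0], corpus, k=2)
print(build_prompt("What is the capital of France?", contexts))
```

The final prompt is just the retrieved passages plus the question; everything else in a RAG stack (chunking, vector databases, reranking) exists to make these three steps work at scale.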
Week 2: Production-Grade RAG Apps
| 📚 Curriculum | 🧑💻 Assignment | 🧰 Tools |
|---|---|---|
| Live Session 3: 🚀 Industry Use Cases & End-to-End RAG |
- Understand the state of production LLM application use cases in industry
- Ideate with peers & peer supporters
- Build an end-to-end RAG application using everything we’ve learned so far | Deploy an E2E Pythonic RAG Application
🚧 Advanced Build: Determine a specific use-case for RAG, and adapt your challenge to that new use case. | LLM: OpenAI GPT models, Anthropic Claude Embedding Model: OpenAI embeddings Orchestration: OpenAI Python SDK User Interface: Vibe Coded w/ React Deployment: Vercel
Relevant papers/blogs
- Identifying and Scaling Use Cases (April 2025) | | Live Session 4: ⛓️ Production-Grade RAG with LangGraph
- Why LangChain, OpenAI, QDrant, LangSmith
- Understand LangChain & LangGraph core constructs
- Introduce primary cohort use case
- Build a RAG system with LangChain and Qdrant
- Intro to LangSmith for evaluation and monitoring | 1. Build a production-grade RAG application with LangGraph
🚧 Advanced Build: Extending the Graph with Complex Flows
- Evaluate your RAG application using off-the-shelf and custom evaluators in LangSmith | LLM: OpenAI GPT models Embedding Model: OpenAI embeddings Orchestration: LangChain & LangGraph Vector Database: QDrant Evaluation & Monitoring: LangSmith
Relevant papers/blog
- The rise of “context engineering” (June 2025)
- Is LangGraph Used in Production? (February 2025) |
Week 3: Agents, Multi-Agent Systems, and Context Engineering
| 📚 Curriculum | 🧑💻 Assignment | 🧰 Tools |
|---|---|---|
| Live Session 5: 🕴️ Production-Grade Agents with LangGraph |
- Answer the question: “What is an agent?”
- Understand how to build production-grade agent applications using LangGraph
- Understand Context Engineering from first principles
- How to use LangSmith to evaluate agentic RAG applications | Build and evaluate your first agent (e.g., agentic RAG) application!
🚧 Advanced Build: Create an agent with 3 tools that can research a specific topic of your choice. Deploy the application end-to-end with a vibe-coded front end | LLM: OpenAI GPT models Embedding Model: OpenAI embeddings Orchestration: LangChain & LangGraph Vector Database: QDrant Function Calling: OpenAI Tools Evaluation & Monitoring: LangSmith
Relevant papers
- How to think about agent frameworks (April 2025)
- Context Engineering (for agents) (July 2025) | | Live Session 6: 🔄 Multi-Agent Applications
- Understand what multi-agent systems are and how they operate.
- Extend the primary cohort use case to a multi-agent solution
- Build a production-grade multi-agent application using LangGraph | Building a multi-agentic LangGraph application that allows us to separate search and retrieval (Research Team) from generating the final output (Document Writing Team)
🚧 Advanced Build: Build a graph to produce a social media post about a given Machine Learning Paper. | LLM: OpenAI GPT models Embedding Model: OpenAI embeddings Orchestration: LangChain & LangGraph Vector Database: QDrant Function Calling: OpenAI Tools
Relevant papers/blogs
- Don’t Build Multi-Agents (June 2025)
- How we built our multi-agent research system (June 2025)
- Zero to One: Learning Agentic Patterns (May 2025) |
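The "What is an agent?" question from Session 5 boils down to a loop: a model decides, a tool runs, and the result feeds back into the next decision. The sketch below keeps that loop self-contained by stubbing the "LLM" with scripted decisions; every name here is illustrative, and frameworks like LangGraph wrap this same cycle around a real model.

```python
def search_tool(query: str) -> str:
    """Hypothetical search tool returning a canned result for illustration."""
    return f"Top result for '{query}': RAG combines retrieval with generation."

def calculator_tool(expression: str) -> str:
    """Evaluate simple space-separated arithmetic like '2 + 3'."""
    a, op, b = expression.split()
    ops = {"+": lambda x, y: x + y, "*": lambda x, y: x * y}
    return str(ops[op](float(a), float(b)))

TOOLS = {"search": search_tool, "calculator": calculator_tool}

def fake_llm(messages: list[dict]) -> dict:
    """Stub model: first call requests a tool, second call gives a final answer."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "search", "args": "what is RAG"}
    return {"final": "RAG augments an LLM's answer with retrieved context."}

def run_agent(user_input: str) -> str:
    """The agent loop: decide -> act -> observe, until a final answer."""
    messages = [{"role": "user", "content": user_input}]
    for _ in range(5):  # cap iterations to avoid infinite loops
        decision = fake_llm(messages)
        if "final" in decision:
            return decision["final"]
        result = TOOLS[decision["tool"]](decision["args"])
        messages.append({"role": "tool", "content": result})
    return "Agent hit its step limit."

print(run_agent("What is RAG?"))
```

A multi-agent system, in this framing, is just several such loops whose outputs become each other's inputs, which is what the Research Team / Document Writing Team split in Session 6 does.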
Week 4: RAG & Agent Evaluation with Synthetic Data
| 📚 Curriculum | 🧑💻 Assignment | 🧰 Tools |
|---|---|---|
| Live Session 7: 🪄 Synthetic Data Generation for Evaluation |
- An overview of Synthetic Data Generation (SDG)
- How to use SDG for Evaluation
- Generating high-quality synthetic test data sets for RAG applications in general and for our primary cohort use case specifically
- How to use LangSmith to baseline performance, make improvements, and then compare | Generate synthetic test data for RAG evaluation, load it into LangSmith, and evaluate RAG against the synthetic test data
🚧 Advanced Build: Reproduce the RAGAS Synthetic Data Generation steps, but utilize a LangGraph Agent Graph instead of the Knowledge Graph approach. | LLM: OpenAI GPT models Embedding Model: OpenAI embeddings Orchestration: LangChain & LangGraph Vector Database: QDrant Evaluation: RAGAS, LangSmith
Relevant papers/blogs
- All about synthetic data generation (Nov 2024)
- Mastering LLM Techniques: Evaluation (Jan 2025) | | Live Session 8: 📊 RAG and Agent Evaluation
- Build RAG and Agent applications with LangGraph
- Evaluate RAG and Agent applications quantitatively with the RAG ASsessment (RAGAS) framework
- Use metrics-driven development to measurably improve agentic applications with RAGAS | 1. Build and evaluate RAG application with RAGAS RAG metrics
- Build and evaluate ReAct agent application with RAGAS agent metrics | LLM: OpenAI GPT models Embedding Model: OpenAI embeddings Orchestration: LangChain & LangGraph Vector Database: QDrant Function Calling: OpenAI Tools Evaluation: RAGAS
Relevant papers
- RAGAS (2023) |
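RAGAS metrics rely on an LLM judge, but the core idea of Week 4 (scoring retrieval against a synthetic test set, establishing a baseline, then comparing after changes) can be shown with two judge-free retrieval metrics: hit rate and mean reciprocal rank (MRR). This is a sketch with a made-up test set, not the RAGAS implementation; in practice the questions and gold labels would be generated by an LLM.

```python
def hit_rate_and_mrr(test_set: list[dict], k: int = 3) -> tuple[float, float]:
    """Hit rate@k: fraction of questions whose gold doc appears in the top k.
    MRR: average of 1/rank of the gold doc (0 when it is missed)."""
    hits, rr_sum = 0, 0.0
    for item in test_set:
        retrieved = item["retrieved"][:k]
        if item["gold"] in retrieved:
            hits += 1
            rr_sum += 1.0 / (retrieved.index(item["gold"]) + 1)
    n = len(test_set)
    return hits / n, rr_sum / n

# Toy synthetic test set: each entry pairs the single gold doc id for a
# question with the ids the retriever actually returned, in rank order.
test_set = [
    {"gold": "doc1", "retrieved": ["doc1", "doc7", "doc3"]},  # found at rank 1
    {"gold": "doc2", "retrieved": ["doc5", "doc2", "doc9"]},  # found at rank 2
    {"gold": "doc4", "retrieved": ["doc8", "doc6", "doc5"]},  # missed
]
hit, mrr = hit_rate_and_mrr(test_set)
print(f"hit rate@3 = {hit:.2f}, MRR = {mrr:.2f}")  # hit rate@3 = 0.67, MRR = 0.50
```

Metrics-driven development then means re-running exactly this computation after each retrieval change and keeping only changes that move the numbers.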
Week 5: Advanced Retrieval and Agentic Reasoning
| 📚 Curriculum | 🧑💻 Assignment | 🧰 Tools |
|---|---|---|
| Live Session 9: 🐕 Advanced Retrieval Strategies for RAG Apps |
- Understand how advanced retrieval and chunking techniques can enhance RAG
- Compare the performance of retrieval algorithms for RAG
- Understand the fine lines between chunking, retrieval, and ranking for our primary cohort use case
- Learn best practices for retrieval pipelines | Build a RAG application and evaluate different retrieval strategies
🚧 Advanced Build: Implement RAG-Fusion using the LangChain ecosystem. | Our Standard RAG Stack for Building and Evaluating Apps 👆
Relevant papers BM25 Reciprocal Rank Fusion | | Live Session 10: 🧠 Advanced Agentic Reasoning
- Discuss best-practice use of reasoning models
- Understand planning and reflection agents
- Build an Open-Source Deep Research agent application using LangGraph
- Investigate evaluating complex agent applications with the latest tools | Build and evaluate an unrolled version of Open Deep Research from LangGraph | Our Standard Agent Stack for Building and Evaluating Apps👆
Relevant Papers CoT Prompting Self-Refine Reflexion Scaling Test-Time Compute |
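Reciprocal Rank Fusion, cited in Session 9 above, is simple enough to implement directly: each document earns 1/(k + rank) from every ranked list it appears in, so a document that ranks well across retrievers beats one that tops only a single list. A sketch with illustrative doc ids (k=60 follows the original RRF paper's suggested constant):

```python
def reciprocal_rank_fusion(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Fuse ranked lists of doc ids: score(d) = sum over lists of 1/(k + rank),
    with rank starting at 1. Returns doc ids sorted by fused score."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# doc1 is never first in either list, but ranking second in BOTH
# outscores documents that top only one list.
bm25_results = ["doc9", "doc1", "doc5"]
dense_results = ["doc3", "doc1", "doc8"]
fused = reciprocal_rank_fusion([bm25_results, dense_results])
print(fused[0])  # doc1
```

This is the same trick RAG-Fusion (the Advanced Build above) applies across results for several reformulated queries rather than several retrievers.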
Week 6: Certification Challenge and Another Agent Framework
| 📚 Curriculum | 🧑💻 Assignment | 🧰 Tools | 🎯 Demo Day |
|---|---|---|---|
| Live Session 11: 🧑🎓 Certification Challenge |
- Introduce the Certification Challenge
- Pitch your problem, audience, and solution
- Network with people outside of your group | The Certification Challenge
- Define Problem & Audience
- Propose Solution
- Deal with Data
- Build E2E Agentic RAG Prototype
- Create Golden Test Data Set
- Assess Performance | Up to you! | Breakout Room: Demo Day Project Pitches, within and across journey groups. | | Live Session 12: 🤖 OpenAI Agents SDK
- Understand the suite of tools for building agents with OpenAI and the evolution of their tooling
- Core constructs of the Agents SDK and comparison to other agent frameworks
- How to use monitoring and observability tools on the OpenAI platform | [Extra Credit] Build and evaluate a multi-agent Research Bot with a Planner Agent, Search Agent, and Writer Agent | LLM: OpenAI GPT models Embedding Model: OpenAI embeddings Orchestration: Agents SDK Function Calling: OpenAI Tools Evaluation & Monitoring: OpenAI Platform
Relevant blog New Tools for Building Agents | |
🚢 Production (Ship)
Week 7: Model Context Protocol & Agent Ops
| 📚 Curriculum | 🧑💻 Assignment | 🧰 Tools |
|---|---|---|
| Live Session 13: 🌀 Model Context Protocol (MCP) |
- Understand Model Context Protocol (MCP) from client and server sides
- Learn MCP resource types and how to design useful MCP servers
- Build an MCP server that allows us to enhance our search and retrieval toolbox for our primary cohort use case | Build an MCP server filled with custom resources and leverage it in a simple agent application | Our Standard Agentic RAG Stack for Building and Evaluating Apps 👆
Relevant Blogs/Specifications
- MCP Announcement (Nov 2024)
- Model Context Protocol | | Live Session 14: 🚢 Deploying Agents to Production
- Deploy your applications to APIs directly via LangGraph Platform
- Deploy an agent with tool and data access for our primary cohort use case
- Learn how to monitor, visualize, debug, and interact with your LLM applications with LangSmith and LangGraph Studio | Deploy our production-grade LangGraph agent (with access to a tool and data) to an API endpoint using LangGraph Server
🚧 Advanced Build: Provide tool and data resources through an MCP server | Our Standard LangX Agentic RAG Stack for Building and Evaluating Apps 👆
+ Monitoring: LangSmith Deployment: LangGraph Platform / LangGraph Server Visualization, Interaction, and Debugging: LangGraph Studio | | | | |
Week 8: Agent2Agent Protocol, Guardrails, and Caching
| 📚 Curriculum | 🧑💻 Assignment | 🧰 Tools |
|---|---|---|
| Live Session 15: 🔀 Agent2Agent (A2A) Protocol & Agent Ops |
- Defining LLM Operations (LLM Ops) & Agent Operations (Agent Ops)
- Understand Agent2Agent Protocol (A2A), including how remote agents and client agents interact
- How to enable agent-to-agent communication with Agent cards and MCP
- Deploy an agent for our primary cohort use case and then build another agent to act as a user of our application
| Build an agent that acts as a user of our agent application, and use our A2A system to mimic usage of the real application | Our Standard Agentic RAG Stack for Building and Evaluating Apps 👆
Relevant Blogs/Specifications
- A2A Announcement (April 2025)
- Agent2Agent Protocol | | Live Session 16: 🛤️ Guardrails & Caching
- Understand guardrails, including the key categories of guardrails
- Understand the importance of caching
- How to use Prompt and Embedding caching
- Apply guardrails and caching directly to our primary cohort use case | Build guardrails and set up semantic caching for our agent application | Our On-Prem LangX Agentic RAG Stack👆
+ Python
Relevant Papers & Blogs - The AI Guardrails Index (February 2025)
- Caching (July 2025) |
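Semantic caching, covered in Session 16, returns a stored answer when a new query embeds close enough to a previously answered one, skipping the LLM call entirely. The sketch below uses a bag-of-words count as a stand-in embedding so it runs without a model; the threshold, class, and names are illustrative, and a production cache (e.g., with Redis) would store real embedding vectors.

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    """Stand-in 'embedding': a bag-of-words count. Real caches use a model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class SemanticCache:
    """Return a cached answer when a new query is similar enough to an old one."""
    def __init__(self, threshold: float = 0.8):
        self.threshold = threshold
        self.entries: list[tuple[Counter, str]] = []

    def get(self, query: str):
        q = embed(query)
        for vec, answer in self.entries:
            if cosine(q, vec) >= self.threshold:
                return answer  # cache hit: no LLM call needed
        return None  # cache miss: call the LLM, then put() the result

    def put(self, query: str, answer: str):
        self.entries.append((embed(query), answer))

cache = SemanticCache()
cache.put("what is the capital of france", "Paris")
print(cache.get("what is the capital of france?"))  # near-duplicate query hits
print(cache.get("how do i cook pasta"))             # unrelated query misses
```

The threshold is the key design choice: too low and unrelated queries get stale answers, too high and paraphrases miss the cache.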
Week 9: Deploying Open-Source Endpoints and On-Prem Agents
| 📚 Curriculum | 🧑💻 Assignment | 🧰 Tools |
|---|---|---|
| Session 17: 🔓 Open-Source Model Endpoints |
- Understand how to deploy open LLMs and embeddings to scalable endpoints
- Discuss how to choose an inference server
- Build an E2E enterprise agentic RAG application with LCEL | 1. Deploy Open LLM and Embedding Model Endpoints
- Build and deploy an E2E Agentic RAG Application | Our Standard LangX Agentic RAG Stack for Building and Evaluating Apps 👆
+ Open LLM: Llama-3.3-70B-Instruct-Turbo Open Embeddings: llama-3.2-nv-embedqa-1b-v2 or from MTEB LLM Serving & Inference: Together AI
Relevant papers MTEB AWQ | | Session 18: 🏢 On-Prem Agent Applications
- Introduction to Building On-Prem
- Hardware & compute Considerations
- Local LLM & Embedding Model Hosting Comparison
- How to build and present an On-Prem Solution to stakeholders | Building On-Prem Agents with LangGraph, LangGraph Server, and ollama
🚧 Advanced Build: Implement Semantic Caching with Redis (or similar solution), and add tracing through LangSmith, or WandB. | Our Production LangX Agentic RAG Stack👆
LLM Serving & Inference: ollama | | | | |
🚀 Demo Day (Share)
Week 10: Demo Day
| 📚 Curriculum |
|---|
| Session 19: Demo Day Dress Rehearsal |
- Code Freeze
- Full Dress Rehearsal | | Session 20: AI Makerspace Demo Day & Graduation!
- All are welcome! This event is open to the public! |