Let's break down a simple RAG application to understand how every component works.
If you know a little Python, you'll leave with a complete understanding of how a basic RAG (Retrieval-Augmented Generation) system can answer questions from a PDF document using an LLM.
To make things a bit more interesting, I'm using Llama 3.1 as the LLM that powers the system. It's a great way to learn how to use open-source models and run them locally. I'm writing all of the code on Lightning AI Studios, which gives me a persistent, virtual IDE and access to GPUs.
Link to the Lightning Studio: https://lightning.ai/svpino/studios/a...
Link to the GitHub repository: https://github.com/svpino/gentle-intr...
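The full implementation lives in the repository above, but the core retrieval idea can be sketched in plain Python. This is a simplified stand-in, not the video's actual code: a real pipeline would embed chunks with a model and query a vector store, then send the top chunks plus the question to Llama 3.1. Here, simple word overlap stands in for embedding similarity, and all function names are illustrative.

```python
# Minimal sketch of the retrieval step in a RAG pipeline.
# Word overlap stands in for embedding similarity; a real system
# would use a vector store and pass the prompt to Llama 3.1.

def chunk_text(text, size=10):
    """Split text into fixed-size word chunks (a real system would chunk the parsed PDF)."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def score(chunk, question):
    """Count shared words between chunk and question (stand-in for cosine similarity)."""
    return len(set(question.lower().split()) & set(chunk.lower().split()))

def retrieve(chunks, question, k=2):
    """Return the k chunks most relevant to the question."""
    return sorted(chunks, key=lambda c: score(c, question), reverse=True)[:k]

def build_prompt(chunks, question):
    """Assemble the context-plus-question prompt that would be sent to the LLM."""
    context = "\n".join(chunks)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

doc = ("Llama 3.1 is an open-source large language model. RAG systems retrieve "
       "relevant passages from a document and feed them to an LLM as context.")
chunks = chunk_text(doc)
question = "What does a RAG system retrieve?"
prompt = build_prompt(retrieve(chunks, question), question)
```

The prompt produced here is what gets handed to the model; swapping the scoring function for real embeddings and the final string for an API call to a local Llama 3.1 turns this sketch into the full system.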
I teach a live, interactive program that'll help you build production-ready Machine Learning systems from the ground up. Check it out here:
https://www.ml.school
To keep up with my content:
• Twitter/X: @svpino
• LinkedIn: @svpino
🔔 Subscribe for more stories: @underfitted