Generate audio podcasts from PDFs using LangChain
Updated May 10, 2023 · Jupyter Notebook
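The title repo's pipeline, as commonly built, is: extract text from the PDF, turn it into a speakable script, then render each segment with a text-to-speech engine. A minimal sketch of the middle step, assuming plain strings stand in for LangChain's PDF loader output (the function name `to_script` and the sample text are illustrative, not from the repo):

```python
# Sketch: extracted PDF text -> short, speakable podcast segments.
# Real pipelines use a LangChain document loader and an LLM summarizer;
# here a naive sentence splitter stands in so the sketch runs anywhere.
import re
import textwrap

def to_script(pdf_text: str, max_chars: int = 120) -> list[str]:
    """Split extracted text into short segments suitable for TTS playback."""
    sentences = re.split(r"(?<=[.!?])\s+", pdf_text.strip())
    segments: list[str] = []
    for sentence in sentences:
        # Wrap long sentences so no segment exceeds the TTS-friendly limit.
        segments.extend(textwrap.wrap(sentence, width=max_chars))
    return segments

page = ("LangChain chains a document loader, a summarizer, and an output step. "
        "A podcast generator reads each summary aloud with a text-to-speech engine.")
for i, seg in enumerate(to_script(page), 1):
    print(f"[segment {i}] {seg}")
```

A TTS library would then render each segment to audio (for example pyttsx3's `engine.save_to_file`, though which engine the repo uses is an assumption here).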
RAG implemented from scratch, without LangChain or LangGraph, designed specifically for processing and querying PDF documents, with advanced support for visual content such as tables, charts, and mathematical formulas.
RAG-based Multi-PDF Question Answering system with semantic search (FAISS) and LLM inference (Groq & Ollama)
Production-ready multilingual RAG system for scientific PDFs. Supports 10+ Indic languages with E5 embeddings, ChromaDB vector store, Gemini 2.5 Flash LLM, and NLLB-200 translation. Ask questions in any language, get accurate answers with citations
AI-powered PDF Q&A chatbot. Upload any document and have a real conversation with it. Built with a RAG architecture using LangChain, Groq (Llama 3.3-70B), ChromaDB, and HuggingFace embeddings; completely free to run.
Sleek Streamlit chat app for Google Gemini (Files API). Dark, gradient UI with model picker, usage dialog, file/image/audio/PDF attach & preview, chat history, image persistence, robust error handling, and token usage tracking. Supports streaming replies and modular backend via google-genai.
RAG over PDFs. Ask any document a question, get a grounded answer with source passages cited. Live at inferlens.latentaxis.io
Streamlit-based chatbot to interact with PDFs using Retrieval-Augmented Generation (RAG), FAISS, Sentence Transformers, and Mistral LLM
Enables context-aware question answering over PDFs using retrieval-augmented generation with vector embeddings. Built with Next.js App Router and OpenAI models for low-latency document search and response generation.
A local PDF chatbot that uses RAG to answer questions from a single text-based PDF with citations and out-of-scope detection.
Local PDF Q&A with Ollama
A high-performance Speculative RAG pipeline designed to reduce latency by combining fast draft generation and accurate verification using Groq Llama models, local HuggingFace embeddings, ChromaDB vector search, and end-to-end observability with Langfuse.
Document Q&A system using RAG, FAISS vector search and LLM inference
Chat with annual reports and financial statements using LangChain, Redis Vector Store, and Streamlit — fully Dockerized RAG pipeline
An AI-powered research assistant that answers academic questions from uploaded PDFs or links (arXiv, PubMed) and returns context-rich answers with citation support using LangChain, LLaMA 3 (Groq), and FAISS.
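Nearly every repo above follows the same retrieve-then-answer pattern: chunk the PDF text, embed the chunks, index them in a vector store (FAISS, ChromaDB, Redis), rank chunks by similarity to the question, and hand the top hits to an LLM. A stdlib-only sketch of the retrieval step, using bag-of-words cosine similarity as a stand-in for dense sentence-transformer embeddings and a FAISS index (all names and the sample document are illustrative):

```python
# Sketch of RAG retrieval: chunk -> "embed" -> rank by cosine similarity.
# Bag-of-words counts stand in for dense embeddings; a sort stands in for
# a FAISS nearest-neighbour index, so the sketch runs with no dependencies.
import math
import re
from collections import Counter

def chunk(text: str, size: int = 40) -> list[str]:
    """Split text into fixed-size word windows (real systems use token-aware splitters)."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text: str) -> Counter:
    """Bag-of-words 'embedding' (stand-in for sentence-transformer vectors)."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question: str, chunks: list[str], k: int = 2) -> list[str]:
    """Return the k chunks most similar to the question (FAISS does this over dense vectors)."""
    q = embed(question)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

document = (
    "FAISS builds an index over dense vectors for fast nearest-neighbour search. "
    "ChromaDB is an embedded vector store with metadata filtering. "
    "Retrieval-augmented generation grounds LLM answers in retrieved passages."
)
top = retrieve("How does FAISS search vectors?", chunk(document, size=12))
print(top[0])
```

In the real systems listed, `retrieve` returns passages that are stuffed into the LLM prompt as context, which is what makes the answers grounded and citable.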