A multi-model conversational AI chatbot built with Streamlit, LangChain, LangGraph, and FastAPI. Users can chat with LLMs from multiple providers (Groq, Google Gemini, and more) and enable advanced tools such as Web Search, Wikipedia, and a Python REPL.
| Feature | Description |
|---|---|
| Multi-Model Support | Native support for Groq LLaMA3, Google Gemini, and more. |
| Advanced Tools | Integrated Web Search (Tavily), Wikipedia, and Python REPL. |
| Smart Orchestration | Powered by LangGraph for robust tool-use decision logic. |
| Optimized Performance | History sliding window & recursion limits for cost/speed efficiency. |
| Customization | Easily tailor system prompts and add custom tools in tools.py. |
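The "Customization" row above points at `tools.py` for adding custom tools. In the real project this would go through LangChain's tool interface; purely as an illustration of the underlying pattern (the function name and registry below are hypothetical, not taken from the repo), a tool is a named callable with a description the agent can select from:

```python
# Hypothetical sketch of the tool pattern used by agent frameworks:
# each tool is a callable plus a name and description the LLM can
# choose among. In the actual project this would use LangChain's
# @tool decorator inside tools.py.

def word_count(text: str) -> int:
    """Count the number of whitespace-separated words in a string."""
    return len(text.split())

# A registry mapping tool names to (callable, description), mirroring
# how an agent exposes its toolbox to the model.
TOOLS = {
    "word_count": (word_count, word_count.__doc__),
}

fn, desc = TOOLS["word_count"]
result = fn("LangGraph routes tool calls")  # -> 4
```

Adding a new tool is then just defining another documented function and registering it under a unique name.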
| Tool/Library | Purpose |
|---|---|
| Python | Core logic and agent framework. |
| Streamlit | Sleek, interactive frontend for the chat interface. |
| FastAPI | High-performance backend API for model orchestration. |
| LangChain | The backbone for LLM integration and chaining. |
| LangGraph | Stateful graph logic for complex agentic workflows. |
| Docker | Containerization for easy deployment. |
Clone the repository:

```bash
git clone https://github.com/Balaji-R-05/ai-agent-chatbot.git
cd ai-agent-chatbot
```

Your `.env` file should look like this:
```env
# API Keys
GROQ_API_KEY=your_groq_key
TAVILY_API_KEY=your_tavily_key  # For Web Search

# Configuration
MAX_HISTORY=10
RECURSION_LIMIT=10  # Max steps before the agent stops
```

The easiest way to run the application is with Docker Compose:

```bash
docker-compose up --build
```

This starts both the backend (FastAPI) and the frontend (Streamlit).
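The `MAX_HISTORY` and `RECURSION_LIMIT` values from the `.env` example above would typically be read into the backend as integers with sensible defaults. A minimal sketch (the variable names come from the `.env` example; the parsing logic is an assumption, not the project's actual code):

```python
import os

# Read agent limits from the environment, falling back to the
# defaults shown in the .env example when a variable is unset.
MAX_HISTORY = int(os.getenv("MAX_HISTORY", "10"))
RECURSION_LIMIT = int(os.getenv("RECURSION_LIMIT", "10"))
```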
To run locally without Docker, create a virtual environment and install dependencies:

```bash
python -m venv venv
.\venv\Scripts\activate
pip install -r requirements.txt
```

Start both services using the batch script:

```bash
.\run.bat
```

Alternatively, start them separately:
```bash
# Terminal 1: Backend
uvicorn main:app --port 8000

# Terminal 2: Frontend
streamlit run client/app.py
```

Once running, open:

- API Documentation: http://127.0.0.1:8000/docs
- Streamlit App: http://127.0.0.1:8501
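The "history sliding window" from the features table caps the conversation context sent to the model at `MAX_HISTORY` messages, bounding prompt size, cost, and latency. A minimal sketch of the idea (the function name is hypothetical; the project's actual trimming logic may differ):

```python
def trim_history(messages: list, max_history: int = 10) -> list:
    """Keep only the most recent max_history messages so the
    prompt sent to the LLM stays bounded in size."""
    return messages[-max_history:]

chat = [f"msg {i}" for i in range(25)]
window = trim_history(chat, max_history=10)  # keeps "msg 15".."msg 24"
```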
Developed by Balaji-R-05