A scalable system, built with FastAPI, LangChain, and SQLite, for managing, monitoring, and orchestrating AI agents.
Agent Management
- Create and configure AI agents
- Start/stop agent execution
- Monitor agent status and performance
- Real-time updates via WebSocket
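To make the "create and configure" part concrete, here is a rough sketch of the data shapes an agent manager like this typically works with; the class and field names are illustrative assumptions, not this project's actual schema:
```python
# Illustrative agent/config shapes using Pydantic; class and field names
# are assumptions for this README, not the project's actual schema.
from enum import Enum
from typing import Optional

from pydantic import BaseModel, Field


class AgentStatus(str, Enum):
    CREATED = "created"
    RUNNING = "running"
    STOPPED = "stopped"
    ERROR = "error"


class AgentConfig(BaseModel):
    model_name: str = "gpt-3.5-turbo"
    temperature: float = Field(0.7, ge=0.0, le=2.0)


class Agent(BaseModel):
    id: Optional[int] = None
    name: str
    config: AgentConfig = Field(default_factory=AgentConfig)
    status: AgentStatus = AgentStatus.CREATED
```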
Task Execution
- Assign tasks to agents
- Track task progress
- Store execution history
- Handle errors and retries
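For "handle errors and retries", one common pattern is to wrap task execution in a retry loop with exponential backoff; the sketch below shows the idea only and is not the manager's actual implementation:
```python
# Sketch of a retry wrapper for task execution; names and the retry policy
# are illustrative assumptions, not the project's actual code.
import asyncio
import logging

logger = logging.getLogger(__name__)


async def run_with_retries(make_task, max_attempts: int = 3, base_delay: float = 1.0):
    """Run an async task, retrying with exponential backoff on failure."""
    for attempt in range(1, max_attempts + 1):
        try:
            return await make_task()
        except Exception as exc:
            logger.warning("Task attempt %d/%d failed: %s", attempt, max_attempts, exc)
            if attempt == max_attempts:
                raise
            await asyncio.sleep(base_delay * 2 ** (attempt - 1))

# Usage (with a hypothetical agent API):
#     result = await run_with_retries(lambda: agent.execute(task))
```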
Resource Management
- Database persistence
- Memory management
- Tool integration
- State tracking
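Persistence uses SQLite; assuming SQLAlchemy as the ORM layer (suggested by the sqlite:/// URL in the configuration below, though not stated in this README), the stored models might look roughly like this:
```python
# Hypothetical SQLAlchemy models for agents and tasks; table and column
# names are illustrative assumptions, not the project's actual schema.
from sqlalchemy import JSON, Column, DateTime, ForeignKey, Integer, String, create_engine, func
from sqlalchemy.orm import declarative_base, relationship, sessionmaker

Base = declarative_base()


class AgentModel(Base):
    __tablename__ = "agents"

    id = Column(Integer, primary_key=True)
    name = Column(String, unique=True, nullable=False)
    config = Column(JSON, default=dict)
    status = Column(String, default="created")


class TaskModel(Base):
    __tablename__ = "tasks"

    id = Column(Integer, primary_key=True)
    agent_id = Column(Integer, ForeignKey("agents.id"), nullable=False)
    type = Column(String, nullable=False)
    input = Column(String, nullable=False)
    result = Column(JSON, nullable=True)
    created_at = Column(DateTime, server_default=func.now())

    agent = relationship(AgentModel, backref="tasks")


# Matches DATABASE_URL=sqlite:///agents.db from the .env example below
engine = create_engine("sqlite:///agents.db")
SessionLocal = sessionmaker(bind=engine)
Base.metadata.create_all(engine)
```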
- Backend: FastAPI
- Database: SQLite
- AI Framework: LangChain
- LLM Integration: OpenAI
- API Documentation: Swagger/OpenAPI
- Python 3.9+
- OpenAI API key
- A virtual environment tool such as venv
- Clone the repository:
git clone https://github.com/yourusername/ai-agent-manager.git
cd ai-agent-manager
- Create and activate virtual environment:
# Create virtual environment
python3 -m venv venv
# Activate (Mac/Linux)
source venv/bin/activate
# Activate (Windows)
# venv\Scripts\activate
- Install dependencies:
pip install -r requirements.txt
- Create a .env file:
# .env
OPENAI_API_KEY=your_api_key_here
DATABASE_URL=sqlite:///agents.db
API_HOST=127.0.0.1
API_PORT=8000
- Start the server (a sketch of run.py follows after these steps):
python run.py
- Access the API documentation:
http://127.0.0.1:8000/docs
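run.py itself is not shown in this README; a plausible sketch of such an entry point, reading the .env values and launching the app with uvicorn, is below (the src.api.main:app import path and the use of pydantic-settings are assumptions):
```python
# run.py - hypothetical entry point; the "src.api.main:app" import string
# and the pydantic-settings loader are assumptions for illustration.
import uvicorn
from pydantic_settings import BaseSettings, SettingsConfigDict


class Settings(BaseSettings):
    model_config = SettingsConfigDict(env_file=".env", extra="ignore")

    openai_api_key: str
    database_url: str = "sqlite:///agents.db"
    api_host: str = "127.0.0.1"
    api_port: int = 8000


if __name__ == "__main__":
    settings = Settings()
    uvicorn.run(
        "src.api.main:app",
        host=settings.api_host,
        port=settings.api_port,
        reload=True,
    )
```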
POST /api/v1/agents/ - Create a new agent
POST /api/v1/agents/{agent_id}/start - Start an agent
POST /api/v1/agents/{agent_id}/stop - Stop an agent
GET /api/v1/agents/{agent_id} - Get agent status
POST /api/v1/agents/{agent_id}/tasks - Execute a task
WS /api/v1/ws/agents/{agent_id} - Real-time agent updates
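A minimal client for the WebSocket endpoint might look like the following; only the endpoint path comes from the list above, and the printed message format is an assumption:
```python
# Sketch of a WebSocket client for real-time agent updates; the message
# structure is an assumption, only the URL path comes from this README.
import asyncio
import json

import websockets  # pip install websockets


async def watch_agent(agent_id: int) -> None:
    uri = f"ws://127.0.0.1:8000/api/v1/ws/agents/{agent_id}"
    async with websockets.connect(uri) as ws:
        async for message in ws:
            print(f"agent {agent_id} update:", json.loads(message))


if __name__ == "__main__":
    asyncio.run(watch_agent(1))
```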
# Create an agent
POST /api/v1/agents/
{
"name": "research_agent",
"config": {
"model_name": "gpt-3.5-turbo",
"temperature": 0.7
}
}
# Execute a task
POST /api/v1/agents/{agent_id}/tasks
{
"type": "research",
"input": "What are the latest developments in AI?"
}
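The same requests can be sent from a script, for example with the requests library; reading an id field from the create response is an assumption about the response shape:
```python
# Sketch of calling the example endpoints above with requests; the "id"
# field in the create response is an assumed part of the API's reply.
import requests

BASE = "http://127.0.0.1:8000/api/v1"

# Create an agent
agent = requests.post(
    f"{BASE}/agents/",
    json={
        "name": "research_agent",
        "config": {"model_name": "gpt-3.5-turbo", "temperature": 0.7},
    },
).json()
agent_id = agent["id"]  # assumes the response includes an "id" field

# Start the agent, then submit a task
requests.post(f"{BASE}/agents/{agent_id}/start")
task = requests.post(
    f"{BASE}/agents/{agent_id}/tasks",
    json={"type": "research", "input": "What are the latest developments in AI?"},
)
print(task.json())
```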
ai-agent-manager/
├── src/
│   ├── api/             # API endpoints and routing
│   ├── config/          # Configuration and settings
│   ├── core/            # Core business logic
│   └── database/        # Database models and setup
├── tests/               # Test files
├── .env                 # Environment variables
├── requirements.txt     # Python dependencies
└── run.py               # Application entry point
# Run all tests
pytest
# Run specific test file
pytest tests/test_agent_manager.py
# Run with coverage
pytest --cov=src tests/
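A new endpoint test can follow the usual FastAPI TestClient pattern; the app import path and the response fields asserted below are assumptions about the project layout:
```python
# tests/test_agents_api.py - hypothetical test sketch; the app import path
# and the asserted response fields are assumptions, not the real layout.
from fastapi.testclient import TestClient

from src.api.main import app  # assumed location of the FastAPI app

client = TestClient(app)


def test_create_agent_returns_created_agent():
    payload = {
        "name": "research_agent",
        "config": {"model_name": "gpt-3.5-turbo", "temperature": 0.7},
    }
    response = client.post("/api/v1/agents/", json=payload)
    assert response.status_code in (200, 201)
    assert response.json()["name"] == "research_agent"
```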
- Endpoint authentication is not yet implemented (TODO)
- API keys are managed via environment variables
- Database connections are properly managed (see the session-handling sketch below)
- Input validation on all endpoints
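The session-handling sketch referenced above is the standard FastAPI dependency pattern for keeping database connections scoped to a single request; it assumes SQLAlchemy and shows the generic pattern, not this project's exact code:
```python
# Generic per-request session dependency (FastAPI + SQLAlchemy); shown to
# illustrate the pattern, not copied from this project.
from collections.abc import Generator

from sqlalchemy import create_engine
from sqlalchemy.orm import Session, sessionmaker

engine = create_engine("sqlite:///agents.db", connect_args={"check_same_thread": False})
SessionLocal = sessionmaker(bind=engine)


def get_db() -> Generator[Session, None, None]:
    """Open a session for the request and always close it afterwards."""
    db = SessionLocal()
    try:
        yield db
    finally:
        db.close()
```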
- Fork the repository
- Create a feature branch
- Commit your changes
- Push to the branch
- Create a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
- Add authentication and authorization
- Implement rate limiting
- Add more agent tools and capabilities
- Enhance monitoring and analytics
- Add support for multiple LLM providers
- Implement agent collaboration features
- Currently only supports single-user mode
- Limited to OpenAI's API
- Basic error handling
For support, please open an issue in the GitHub repository or contact the maintainers.
Made with ❤️ by [Your Name]