LiteLLM - Best as a proxy/router
- Unified interface across providers
- Supports the tool_choice parameter
- Can route to local models (vLLM, Ollama) or cloud APIs
- Automatic format translation
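With the Python SDK, the same call works whether the target is a local Ollama model or a hosted API; only the model string changes. A minimal sketch, where the `get_weather` tool and the prompt are placeholders for illustration:

```python
from litellm import completion

# Placeholder tool definition in OpenAI function-calling format
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = completion(
    model="ollama/qwen2.5-coder:7b",      # local model via Ollama
    # model="gpt-4o",                     # or a cloud model, same call
    api_base="http://localhost:11434",    # where Ollama is listening
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
    tool_choice="auto",
)
print(response.choices[0].message.tool_calls)
```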
Run Ollama as the backend, then put LiteLLM in front of it to add proper tool_choice support:
```bash
# Terminal 1: Start Ollama
ollama serve

# Terminal 2: Start the LiteLLM proxy in front of it
litellm --model ollama/qwen2.5-coder:7b --api_base http://localhost:11434
```
Make sure the model is available in Ollama first (e.g. `ollama pull qwen2.5-coder:7b`).
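Once the proxy is up, any OpenAI-compatible client can talk to it and force a specific tool via tool_choice. A minimal sketch, assuming the proxy's default port (4000), no master key, and a hypothetical `run_tests` tool:

```python
from openai import OpenAI

# The proxy exposes an OpenAI-compatible API; 4000 is LiteLLM's default port.
# Any placeholder API key works unless a master key was configured.
client = OpenAI(base_url="http://localhost:4000", api_key="sk-placeholder")

# Hypothetical tool, for illustration only
tools = [{
    "type": "function",
    "function": {
        "name": "run_tests",
        "description": "Run the project's test suite",
        "parameters": {"type": "object", "properties": {}},
    },
}]

response = client.chat.completions.create(
    model="ollama/qwen2.5-coder:7b",   # same name the proxy was started with
    messages=[{"role": "user", "content": "Please run the tests."}],
    tools=tools,
    # Force the model to call this specific tool
    tool_choice={"type": "function", "function": {"name": "run_tests"}},
)
print(response.choices[0].message.tool_calls)
```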