ollama
A reference page for the ollama resource
The ollama resource installs Ollama, a runtime for running large language models locally. On macOS it is installed via Homebrew and started as a background service; on Linux the official install script is used, which registers a systemd service automatically.
Parameters:
- models: (array[string]) AI models to pull and keep installed. Model names match those listed in the Ollama library (e.g. "llama3.2", "mistral", "qwen2.5-coder:7b"). Codify adds models that are missing and removes models that are no longer listed.
Example usage:
Install Ollama with a single model
[
  {
    "type": "ollama",
    "models": ["llama3.2"]
  }
]

Install Ollama with multiple models
[
  {
    "type": "ollama",
    "models": ["llama3.2", "mistral", "qwen2.5-coder"]
  }
]

Install Ollama without pulling any models
[
  {
    "type": "ollama"
  }
]

Notes:
- On macOS, Homebrew must be installed before applying the ollama resource. The homebrew resource can install it; see the combined example after these notes.
- On Linux, the official install script (https://ollama.com/install.sh) requires curl and sudo privileges. The script creates an ollama system user and registers a systemd service.
- Models can be large (several gigabytes each). Make sure you have sufficient disk space before adding them to your configuration.
- To see available model names and tags, visit ollama.com/library or run ollama list after installation.
- Removing the ollama resource stops and uninstalls the Ollama binary but does not automatically remove downloaded model data (stored in ~/.ollama on macOS or /usr/share/ollama on Linux for system-level data).
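- If you need to run the Linux installer by hand (for debugging, say), the command documented on ollama.com is curl -fsSL https://ollama.com/install.sh | sh; Codify normally runs it for you.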
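- To reclaim that disk space after removing the resource, delete the model data manually: the ~/.ollama directory on macOS, or /usr/share/ollama on Linux (requires sudo). These are the paths noted above; they may vary between Ollama versions.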
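Because the macOS note above requires Homebrew first, the two resources can be combined in one configuration. A minimal sketch, assuming the homebrew resource takes no parameters for a default install and that Codify applies resources in array order (verify both against the homebrew resource's reference page):
[
  {
    "type": "homebrew"
  },
  {
    "type": "ollama",
    "models": ["llama3.2"]
  }
]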