Tag Archives: llm

Set Ollama default context size on Linux

It's as simple as editing the Ollama service configuration to add an environment variable that sets the context size to 64k tokens. Save and exit, then restart Ollama, and you're done. Note: tested on Ubuntu 24.04 with Ollama 0.15. That's it, enjoy!
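The original commands did not survive extraction; what follows is a minimal sketch of one common approach on systemd-based distros such as Ubuntu, assuming Ollama's `OLLAMA_CONTEXT_LENGTH` environment variable (64k tokens = 65536):

```shell
# Open a drop-in override file for the Ollama systemd service:
sudo systemctl edit ollama.service

# In the editor that opens, add these lines, then save and exit:
#
#   [Service]
#   Environment="OLLAMA_CONTEXT_LENGTH=65536"

# Reload systemd and restart Ollama so the override takes effect:
sudo systemctl daemon-reload
sudo systemctl restart ollama

# Verify the variable is set on the service:
systemctl show ollama --property=Environment
```

Setting the variable at the service level makes 64k the default for every model Ollama loads, without passing a context size on each request.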

Posted in Ollama

How to Tame Your Dragon, aka LLM!

I’m planning to create a series of articles/videos on how to effectively “tame” an LLM to enforce structured output for programmatic use. The goal is to make AI responses predictable, reliable, and easily consumable by applications. What do you think?

Posted in Linux