Simply run it as follows:
OLLAMA_DEBUG=1 ollama serve
Note: If you receive a 404 error saying the model is not available, you may need to export an additional variable so the ollama serve command knows where the models are stored. For some reason, this happened to me on Linux only.
In this case, just export these two variables and then run the command:
export OLLAMA_MODELS=/usr/share/ollama/.ollama/models
export OLLAMA_DEBUG=1
ollama serve
That’s it. Enjoy!