Ollama fails to load the model deepseek-r1:14b, while deepseek-r1:8b and deepseek-r1:32b work flawlessly. RAM and VRAM are sufficient for the 14b model. The llama server becomes unresponsive and crashes when trying to load ...
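A minimal sketch to reproduce the failure, assuming the default Ollama REST endpoint at localhost:11434 and using only the standard /api/generate call; the model tag is the failing one from the report, and the timeout value is an arbitrary choice for illustration:

```python
import json
import urllib.request

# Ask the local Ollama server (default port 11434 assumed) to load and
# generate with deepseek-r1:14b. The 8b and 32b tags load fine; with
# 14b the server reportedly hangs and then crashes during load.
OLLAMA_URL = "http://localhost:11434/api/generate"

payload = json.dumps({
    "model": "deepseek-r1:14b",   # failing tag; 8b/32b work
    "prompt": "Hello",
    "stream": False,
}).encode("utf-8")

req = urllib.request.Request(
    OLLAMA_URL,
    data=payload,
    headers={"Content-Type": "application/json"},
)

try:
    # A hung model load shows up here as a timeout or a dropped
    # connection rather than a normal JSON response.
    with urllib.request.urlopen(req, timeout=300) as resp:
        print(json.loads(resp.read())["response"])
except Exception as exc:
    print(f"Request failed (server unresponsive or crashed?): {exc}")
```

If the 8b or 32b tag is substituted into the same request, it should return a normal response, which isolates the problem to loading the 14b weights.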